modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
mkddatascience/medical-llm
|
mkddatascience
| 2024-01-24T21:34:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T21:34:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LogischeIP/SentimentT2_GPT2
|
LogischeIP
| 2024-01-24T21:32:48Z | 61 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T19:51:59Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: SentimentT2_GPT2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SentimentT2_GPT2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0308
- Accuracy: 0.8644
- F1: 0.8685
- Auc Roc: 0.9297
- Log Loss: 1.0307
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
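For reference, the settings listed above correspond roughly to the following 🤗 `TrainingArguments`. This is a hedged sketch only; the dataset, tokenizer, and GPT-2 classification head wiring are assumptions not specified in this card.

```python
# Hedged sketch of the hyperparameters above; model/dataset wiring is assumed, not taken from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="SentimentT2_GPT2",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",   # linear decay after warmup
    warmup_steps=100,
    num_train_epochs=4,
    fp16=True,                    # "Native AMP" mixed precision
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
```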
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Auc Roc | Log Loss |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:-------:|:--------:|
| 1.1785 | 0.15 | 500 | 0.7334 | 0.8346 | 0.8400 | 0.9144 | 0.7334 |
| 1.1409 | 0.31 | 1000 | 0.8797 | 0.8520 | 0.8649 | 0.9269 | 0.8796 |
| 1.0906 | 0.46 | 1500 | 0.7869 | 0.8744 | 0.8805 | 0.9394 | 0.7869 |
| 1.0163 | 0.62 | 2000 | 0.8381 | 0.8706 | 0.8771 | 0.9366 | 0.8381 |
| 1.0602 | 0.77 | 2500 | 0.9904 | 0.8458 | 0.8616 | 0.9253 | 0.9904 |
| 1.1456 | 0.93 | 3000 | 0.8833 | 0.8483 | 0.8452 | 0.9275 | 0.8832 |
| 0.9662 | 1.08 | 3500 | 0.9737 | 0.8507 | 0.8618 | 0.9354 | 0.9737 |
| 0.8496 | 1.24 | 4000 | 0.9361 | 0.8619 | 0.8680 | 0.9351 | 0.9361 |
| 0.8571 | 1.39 | 4500 | 0.8660 | 0.8619 | 0.8702 | 0.9346 | 0.8660 |
| 0.7506 | 1.55 | 5000 | 0.9359 | 0.8507 | 0.8558 | 0.9316 | 0.9359 |
| 0.8236 | 1.7 | 5500 | 1.1721 | 0.8184 | 0.8433 | 0.9229 | 1.1721 |
| 0.6897 | 1.85 | 6000 | 0.9876 | 0.8532 | 0.8547 | 0.9318 | 0.9876 |
| 0.6699 | 2.01 | 6500 | 0.8947 | 0.8570 | 0.8671 | 0.9323 | 0.8946 |
| 0.6137 | 2.16 | 7000 | 0.9318 | 0.8557 | 0.8661 | 0.9344 | 0.9318 |
| 0.4646 | 2.32 | 7500 | 0.9943 | 0.8595 | 0.8660 | 0.9312 | 0.9944 |
| 0.7042 | 2.47 | 8000 | 0.9150 | 0.8657 | 0.8714 | 0.9345 | 0.9150 |
| 0.4079 | 2.63 | 8500 | 1.0215 | 0.8657 | 0.8750 | 0.9312 | 1.0214 |
| 0.4646 | 2.78 | 9000 | 0.9809 | 0.8619 | 0.8714 | 0.9310 | 0.9809 |
| 0.4707 | 2.94 | 9500 | 1.0151 | 0.8644 | 0.8719 | 0.9279 | 1.0150 |
| 0.5005 | 3.09 | 10000 | 1.0748 | 0.8607 | 0.8651 | 0.9289 | 1.0747 |
| 0.3817 | 3.24 | 10500 | 0.8819 | 0.8781 | 0.8858 | 0.9299 | 0.8818 |
| 0.279 | 3.4 | 11000 | 1.0542 | 0.8607 | 0.8627 | 0.9302 | 1.0541 |
| 0.3527 | 3.55 | 11500 | 1.0148 | 0.8607 | 0.8637 | 0.9312 | 1.0147 |
| 0.3873 | 3.71 | 12000 | 1.0421 | 0.8619 | 0.8648 | 0.9294 | 1.0420 |
| 0.3552 | 3.86 | 12500 | 1.0308 | 0.8644 | 0.8685 | 0.9297 | 1.0307 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
sally9805/bert-base-uncased-finetuned-coha-1950s
|
sally9805
| 2024-01-24T21:32:16Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-24T09:36:32Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-coha-1950s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-coha-1950s
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7863 | 1.0 | 12360 | 2.6555 |
| 2.7601 | 2.0 | 24720 | 2.6271 |
| 2.7636 | 3.0 | 37866 | 2.5433 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
LarryAIDraw/MinakoAino
|
LarryAIDraw
| 2024-01-24T21:32:02Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:19:39Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/137594/sailor-moonsailor-venus
|
mabobe-biyong/m2m100_418M-fr-ful-rel-ft
|
mabobe-biyong
| 2024-01-24T21:31:41Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"Cameroonian culture",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T12:34:27Z |
---
license: afl-3.0
language:
- fr
library_name: transformers
tags:
- Cameroonian culture
---
|
LarryAIDraw/sailor_venus
|
LarryAIDraw
| 2024-01-24T21:31:41Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:19:15Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/153210/minako-ainosailor-venus-sailor-moon
|
LarryAIDraw/sailorvenus-lora-nochekaiser
|
LarryAIDraw
| 2024-01-24T21:31:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:18:56Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/191387/minako-aino-sailor-venus-sailor-moon
|
LarryAIDraw/hikari_lynette_v2
|
LarryAIDraw
| 2024-01-24T21:30:43Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:10:43Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/102387/lynette-genshin-impact-character-lora
|
LarryAIDraw/Nejire_Hero-10
|
LarryAIDraw
| 2024-01-24T21:29:46Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:09:47Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/275362/nejire-hadou-body-suit-bnha
|
YoungMeng/dqn-MsPacmanNoFrameskip-v4
|
YoungMeng
| 2024-01-24T21:29:22Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MsPacmanNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-29T18:50:00Z |
---
library_name: stable-baselines3
tags:
- MsPacmanNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacmanNoFrameskip-v4
type: MsPacmanNoFrameskip-v4
metrics:
- type: mean_reward
value: 1789.00 +/- 1081.22
name: mean_reward
verified: false
---
# **DQN** Agent playing **MsPacmanNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **MsPacmanNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env MsPacmanNoFrameskip-v4 -orga YoungMeng -f logs/
python -m rl_zoo3.enjoy --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env MsPacmanNoFrameskip-v4 -orga YoungMeng -f logs/
python -m rl_zoo3.enjoy --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/ -orga YoungMeng
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
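For illustration, the hyperparameters above map onto a plain Stable-Baselines3 DQN roughly as follows. This is a hedged sketch, not the RL Zoo's actual construction path (the Zoo builds the agent from its YAML config and applies the `AtariWrapper` via the `env_wrapper` entry).

```python
# Hedged sketch: plain SB3 equivalent of the RL Zoo hyperparameters listed above.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# make_atari_env applies the standard AtariWrapper (the env_wrapper entry above)
env = make_atari_env("MsPacmanNoFrameskip-v4", n_envs=1, seed=0)
env = VecFrameStack(env, n_stack=4)  # frame_stack: 4; normalize: False (no VecNormalize)

model = DQN(
    "CnnPolicy",
    env,
    batch_size=32,
    buffer_size=100_000,
    exploration_final_eps=0.01,
    exploration_fraction=0.1,
    gradient_steps=1,
    learning_rate=1e-4,
    learning_starts=100_000,
    optimize_memory_usage=False,
    target_update_interval=1000,
    train_freq=4,
    verbose=1,
)
model.learn(total_timesteps=1_000_000)  # n_timesteps: 1e6
```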
|
mabobe-biyong/m2m100_418M-fr-gho-rel-ft
|
mabobe-biyong
| 2024-01-24T21:29:22Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"Cameroonian culture",
"Ghomala",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T12:44:49Z |
---
license: afl-3.0
language:
- fr
library_name: transformers
tags:
- Cameroonian culture
- Ghomala
---
|
LarryAIDraw/Yuigahama_mother__My_Youth_Romantic_Comedy_Is_Wrong_As_I_Expected
|
LarryAIDraw
| 2024-01-24T21:29:20Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:09:18Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/275188/yuigahama-mother-my-youth-romantic-comedy-is-wrong-as-i-expected
|
LarryAIDraw/metera-nvwls-v1
|
LarryAIDraw
| 2024-01-24T21:28:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:07:39Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/275069/metera-granblue-fantasy-lora-or-3-outfits
|
LarryAIDraw/Tenjoin_Asuka
|
LarryAIDraw
| 2024-01-24T21:27:19Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:06:30Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/274746/gx-tenjoin-asuka
|
LarryAIDraw/Raiden_Shogun__Genshin_impact
|
LarryAIDraw
| 2024-01-24T21:26:44Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:06:01Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/274724/raiden-shogun-genshin-impact
|
LarryAIDraw/MikinamiPlug-10
|
LarryAIDraw
| 2024-01-24T21:26:10Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:05:21Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/274072/mari-illustrious-makinami-school-uniform-neon-genesis-evangelion
|
Denox05/cassie
|
Denox05
| 2024-01-24T21:24:45Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-01-24T21:20:58Z |
---
license: other
license_name: rvc
license_link: LICENSE
---
|
LarryAIDraw/LoRA_MisakaImouto
|
LarryAIDraw
| 2024-01-24T21:23:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-24T21:03:21Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/275963/lora-misaka-10032-misaka-imouto-toaru-majutsu-no-index
|
dev137/NousResearch_Nous-Capybara-34B-exl2-3bpw-h8
|
dev137
| 2024-01-24T21:20:08Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"sft",
"Yi-34B-200K",
"eng",
"dataset:LDJnr/Capybara",
"dataset:LDJnr/LessWrong-Amplify-Instruct",
"dataset:LDJnr/Pure-Dove",
"dataset:LDJnr/Verified-Camel",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T21:16:59Z |
---
language:
- eng
tags:
- sft
- Yi-34B-200K
license:
- mit
datasets:
- LDJnr/Capybara
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
---
## **Nous-Capybara-34B V1.9**
**This is trained on the Yi-34B model with 200K context length, for 3 epochs on the Capybara dataset!**
**First 34B Nous model and first 200K context length Nous model!**
The Capybara series is the first Nous collection of models made by fine-tuning mostly on data created by Nous in-house.
We leverage our novel data synthesis technique called Amplify-Instruct (paper coming soon). The seed distribution and synthesis method combine the top-performing existing data synthesis techniques and distributions used for SOTA models such as Airoboros, Evol-Instruct (WizardLM), Orca, Vicuna, Know_Logic, Lamini, FLASK and others into one lean, holistically formed methodology for the dataset and model. The seed instructions used to start the synthesized conversations are largely based on highly regarded datasets like Airoboros, Know_Logic, EverythingLM and GPTeacher, together with entirely new seed instructions derived from posts on the website LessWrong, supplemented with certain in-house multi-turn datasets like Dove (a successor to Puffin).
While it performs well in its current state, the dataset used for fine-tuning is contained entirely within 20K training examples, roughly 10 times fewer than many similarly performing current models. This is significant for the scaling implications of our next generation of models once we scale our novel synthesis methods to significantly more examples.
## Process of creation and special thank yous!
This model was fine-tuned by Nous Research as part of the Capybara/Amplify-Instruct project led by Luigi D.(LDJ) (Paper coming soon), as well as significant dataset formation contributions by J-Supha and general compute and experimentation management by Jeffrey Q. during ablations.
Special thank you to **A16Z** for sponsoring our training, as well as **Yield Protocol** for their support in financially sponsoring resources during the R&D of this project.
## Thank you to those of you that have indirectly contributed!
While most of the tokens within Capybara are newly synthesized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds to generate the multi-turn data as part of the Amplify-Instruct synthesis.
The datasets shown in green below are the ones we sampled from to curate seeds used during Amplify-Instruct synthesis for this project.
Datasets in blue are in-house curations that existed prior to Capybara.

## Prompt Format
The recommended prompt format is:
Prefix: ``USER:``
Suffix: ``ASSISTANT:``
Stop token: ``</s>``
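As a hedged illustration of this format (not an official snippet from Nous), a plain-transformers call against the unquantized base repo might look like the sketch below; note that this particular repository is an exl2 3bpw quantization, which requires an ExLlamaV2-based loader rather than vanilla transformers.

```python
# Hedged sketch of the USER:/ASSISTANT: format above, using the unquantized
# NousResearch/Nous-Capybara-34B repo as an assumed stand-in for this exl2 quant.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Nous-Capybara-34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "USER: What makes multi-turn training data useful? ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)  # generation stops at the </s> token
print(tokenizer.decode(output[0], skip_special_tokens=True))
```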
## Multi-Modality!
- We currently have a multi-modal model based on Capybara V1.9!
https://huggingface.co/NousResearch/Obsidian-3B-V0.5
It is currently only available at the 3B size, but larger versions are coming!
## Notable Features:
- Uses Yi-34B model as the base which is trained for 200K context length!
- Over 60% of the dataset is composed of multi-turn conversations. (Most models are still only trained on single-turn conversations with no back-and-forth!)
- Over 1,000 tokens on average per conversation example! (Most models are trained on conversation data with fewer than 300 tokens per example.)
- Able to effectively produce complex summaries of advanced topics and studies. (Trained on hundreds of advanced, difficult summarization tasks developed in-house.)
- Ability to recall information up to late 2022 without internet access.
- Includes a portion of conversational data synthesized from LessWrong posts, discussing very in-depth details and philosophies about the nature of reality, reasoning, rationality, self-improvement and related concepts.
## Example Outputs from Capybara V1.9 7B version! (examples from 34B coming soon):



## Benchmarks! (Coming soon!)
## Future model sizes
Capybara V1.9 is currently available in 3B, 7B and 34B sizes, and we plan to eventually release 13B and 70B versions in the future, as well as a potential 1B version based on phi-1.5 or TinyLlama.
## How you can help!
In the near future we plan to leverage the help of domain-specific expert volunteers to eliminate any mathematically or verifiably incorrect answers from our training curations.
If you have at least a bachelor's degree in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise, please contact LDJ on Discord!
## Dataset contamination
We have checked the Capybara dataset for contamination against several of the most popular benchmarks and can confirm that no contamination was found.
We leveraged MinHash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in those benchmarks; we found no exact matches, nor any matches down to the 97% similarity level.
The following are benchmarks we checked for contamination against our dataset:
- HumanEval
- AGIEval
- TruthfulQA
- MMLU
- GPT4All
```
@article{daniele2023amplify-instruct,
title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Effecient LLM Training.},
author={Daniele, Luigi and Suphavadeeprasit},
journal={arXiv preprint arXiv:(comming soon)},
year={2023}
}
```
|
Ri7erLi/RL-learning-LunarLander-v2
|
Ri7erLi
| 2024-01-24T21:16:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T21:16:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.30 +/- 23.39
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
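A minimal sketch of what that code could look like, assuming the checkpoint follows the usual `huggingface_sb3` naming convention (`ppo-LunarLander-v2.zip` is an assumption; the actual filename is not stated in this card):

```python
# Hedged sketch: load and evaluate this PPO agent. The filename is an assumption.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="Ri7erLi/RL-learning-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```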
|
marconardone/Mistral-7B-Trump-Edition
|
marconardone
| 2024-01-24T21:08:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T23:20:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
linoyts/2000_ads_offset_noise_micro
|
linoyts
| 2024-01-24T21:08:40Z | 54 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-24T20:17:57Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_0.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_1.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_2.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: an ad in the style of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - linoyts/2000_ads_offset_noise_micro
<Gallery />
## Model description
### These are linoyts/2000_ads_offset_noise_micro LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`2000_ads_offset_noise_micro.safetensors` here 💾](/linoyts/2000_ads_offset_noise_micro/blob/main/2000_ads_offset_noise_micro.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:2000_ads_offset_noise_micro:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`2000_ads_offset_noise_micro_emb.safetensors` here 💾](/linoyts/2000_ads_offset_noise_micro/blob/main/2000_ads_offset_noise_micro_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `2000_ads_offset_noise_micro_emb` to your prompt. For example, `an ad in the style of 2000_ads_offset_noise_micro_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('linoyts/2000_ads_offset_noise_micro', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='linoyts/2000_ads_offset_noise_micro', filename='2000_ads_offset_noise_micro_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('<s0><s1> ad of a llama wearing headphones').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/linoyts/2000_ads_offset_noise_micro/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Abdul-Ib/all-MiniLM-L6-v2-2024
|
Abdul-Ib
| 2024-01-24T21:08:10Z | 51 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-24T21:08:06Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Abdul-Ib/all-MiniLM-L6-v2-2024
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Abdul-Ib/all-MiniLM-L6-v2-2024')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Abdul-Ib/all-MiniLM-L6-v2-2024)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 70047 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit() method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 140094,
"weight_decay": 0.01
}
```
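Put together, the training setup described above corresponds roughly to the sketch below. The training pairs are placeholders and the starting checkpoint is an assumption; the actual training data is not documented in this card.

```python
# Hedged reconstruction of the training setup above; the examples are placeholders.
from sentence_transformers import InputExample, SentenceTransformer, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed starting checkpoint

train_examples = [
    InputExample(texts=[f"query {i}", f"matching document {i}"]) for i in range(64)
]  # placeholder pairs; the real DataLoader had length 70047 at batch_size=32
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=20,
    scheduler="WarmupLinear",
    warmup_steps=140094,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```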
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
issafuad/mistral-7b-base
|
issafuad
| 2024-01-24T21:05:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"pretrained",
"en",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T21:05:06Z |
---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
tags:
- pretrained
inference:
parameters:
temperature: 0.7
---
# Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
## Model Architecture
Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
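These choices are visible in the published configuration; below is a minimal, weights-free sketch to inspect them (field names follow the transformers `MistralConfig`).

```python
# Minimal sketch: inspect the architecture choices above without downloading weights.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("issafuad/mistral-7b-base")
print(config.num_attention_heads, config.num_key_value_heads)  # grouped-query attention (32 query heads, 8 KV heads)
print(config.sliding_window)                                   # sliding-window attention span
```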
## Troubleshooting
- If you see the following error:
```
KeyError: 'mistral'
```
- Or:
```
NotImplementedError: Cannot copy out of meta tensor; no data!
```
Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer.
## Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
mabobe-biyong/m2m100_418M-fr-kak-rel-ft
|
mabobe-biyong
| 2024-01-24T21:01:54Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"Cameroonian culture",
"Kako",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T12:39:20Z |
---
license: afl-3.0
language:
- fr
tags:
- Cameroonian culture
- Kako
---
|
io-roboto/rl_course_vizdoom_health_gathering_supreme
|
io-roboto
| 2024-01-24T21:01:22Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T20:18:15Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.58 +/- 4.87
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r io-roboto/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
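# Note: the module path below is an artifact of the author's notebook environment
# (the ipykernel launcher); substitute the ViZDoom enjoy script from your Sample-Factory install.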
python -m .home.roboto.Software..miniconda.envs.HFRL.lib.python3.10.site-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
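# Note: as above, the module path is a notebook-environment artifact; substitute the
# ViZDoom train script from your Sample-Factory install.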
python -m .home.roboto.Software..miniconda.envs.HFRL.lib.python3.10.site-packages.ipykernel_launcher --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
mabobe-biyong/m2m100_418M-fr-gba-rel-ft
|
mabobe-biyong
| 2024-01-24T20:57:18Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"Cameroonian culture",
"Gbaya (Nord-Ouest)",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T12:46:39Z |
---
license: afl-3.0
language:
- fr
library_name: transformers
tags:
- Cameroonian culture
- Gbaya (Nord-Ouest)
---
|
tanatapanun/fine-tuned-BioBART-20-epochs-1500-input-256-output
|
tanatapanun
| 2024-01-24T20:55:04Z | 20 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:GanjinZero/biobart-base",
"base_model:finetune:GanjinZero/biobart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T19:30:58Z |
---
license: apache-2.0
base_model: GanjinZero/biobart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-20-epochs-1500-input-256-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBART-20-epochs-1500-input-256-output
This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co/GanjinZero/biobart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9257
- Rouge1: 0.1655
- Rouge2: 0.0291
- Rougel: 0.1256
- Rougelsum: 0.1266
- Gen Len: 34.62
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 6.1052 | 0.0511 | 0.0 | 0.047 | 0.0474 | 22.48 |
| No log | 2.0 | 302 | 1.1483 | 0.077 | 0.0156 | 0.0673 | 0.0678 | 11.56 |
| No log | 3.0 | 453 | 0.9767 | 0.0744 | 0.0182 | 0.0537 | 0.0557 | 23.57 |
| 4.0217 | 4.0 | 604 | 0.9160 | 0.1355 | 0.033 | 0.1053 | 0.1042 | 37.77 |
| 4.0217 | 5.0 | 755 | 0.8850 | 0.1682 | 0.0352 | 0.1342 | 0.1342 | 41.92 |
| 4.0217 | 6.0 | 906 | 0.8736 | 0.1342 | 0.0308 | 0.1037 | 0.1037 | 35.34 |
| 0.761 | 7.0 | 1057 | 0.8582 | 0.144 | 0.0361 | 0.1082 | 0.1095 | 39.27 |
| 0.761 | 8.0 | 1208 | 0.8551 | 0.165 | 0.0392 | 0.1233 | 0.1254 | 39.55 |
| 0.761 | 9.0 | 1359 | 0.8623 | 0.141 | 0.0302 | 0.1169 | 0.1179 | 23.69 |
| 0.5257 | 10.0 | 1510 | 0.8642 | 0.1715 | 0.0436 | 0.1249 | 0.1267 | 45.78 |
| 0.5257 | 11.0 | 1661 | 0.8705 | 0.1702 | 0.0331 | 0.1386 | 0.1385 | 30.28 |
| 0.5257 | 12.0 | 1812 | 0.8761 | 0.169 | 0.035 | 0.1247 | 0.1254 | 42.74 |
| 0.5257 | 13.0 | 1963 | 0.8938 | 0.1719 | 0.0376 | 0.139 | 0.1389 | 29.73 |
| 0.368 | 14.0 | 2114 | 0.8907 | 0.1716 | 0.0402 | 0.1371 | 0.1377 | 36.07 |
| 0.368 | 15.0 | 2265 | 0.9027 | 0.1677 | 0.0324 | 0.1329 | 0.134 | 36.82 |
| 0.368 | 16.0 | 2416 | 0.9141 | 0.16 | 0.0322 | 0.1268 | 0.1281 | 32.87 |
| 0.2635 | 17.0 | 2567 | 0.9177 | 0.1702 | 0.0324 | 0.1312 | 0.1323 | 35.4 |
| 0.2635 | 18.0 | 2718 | 0.9194 | 0.1713 | 0.0333 | 0.1297 | 0.1312 | 37.75 |
| 0.2635 | 19.0 | 2869 | 0.9234 | 0.1693 | 0.0294 | 0.1293 | 0.1299 | 35.69 |
| 0.2141 | 20.0 | 3020 | 0.9257 | 0.1655 | 0.0291 | 0.1256 | 0.1266 | 34.62 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
|
varun-v-rao/roberta-large-mnli-model2
|
varun-v-rao
| 2024-01-24T20:53:29Z | 94 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T18:15:46Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-large-mnli-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-mnli-model2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3560
- Accuracy: 0.9040
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 84
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3066 | 1.0 | 6136 | 0.2844 | 0.8965 |
| 0.2086 | 2.0 | 12272 | 0.2929 | 0.9028 |
| 0.1257 | 3.0 | 18408 | 0.3560 | 0.9040 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
karawalla/shipv1model_20240124
|
karawalla
| 2024-01-24T20:53:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T20:53:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
unity/sentis-MNIST-12
|
unity
| 2024-01-24T20:50:42Z | 4 | 1 |
unity-sentis
|
[
"unity-sentis",
"onnx",
"image-classification",
"license:mit",
"region:us"
] |
image-classification
| 2024-01-10T03:51:43Z |
---
license: mit
library_name: unity-sentis
pipeline_tag: image-classification
---
## MNIST-12 Digit Recognition Model in Unity Sentis format
This is a digit recognition model from the [ONNX Model Zoo](https://github.com/onnx/models/tree/main/validated/vision/classification/mnist) formatted for Unity Sentis 2023
## How to Use
Example source code to run this model can be found at: [Source Code](https://github.com/Unity-Technologies/sentis-samples/tree/main/DigitRecognitionSample)
To use the precompiled *.sentis file, place it in the Assets/StreamingAssets folder and replace the loading code with:
```
Model model = ModelLoader.Load(Application.streamingAssetsPath + "/MNIST-12.sentis");
```

## Unity Sentis
Unity Sentis is the inference engine that runs in Unity 3D. More information can be found [here](https://unity.com/products/sentis).
|
mabobe-biyong/m2m100_418M-fr-med-rel-ft
|
mabobe-biyong
| 2024-01-24T20:44:29Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"Cameroonian culture",
"Medoumba",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T12:42:11Z |
---
license: afl-3.0
language:
- fr
library_name: transformers
tags:
- Cameroonian culture
- Medoumba
---
|
CLMBR/binding-case-transformer-2
|
CLMBR
| 2024-01-24T20:42:53Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T20:26:45Z |
---
tags:
- generated_from_trainer
model-index:
- name: binding-case-transformer-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-case-transformer-2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2236 | 0.03 | 76320 | 4.1964 |
| 4.0182 | 1.03 | 152640 | 4.0272 |
| 3.91 | 0.03 | 228960 | 3.9534 |
| 3.8414 | 1.03 | 305280 | 3.9119 |
| 3.7918 | 0.03 | 381600 | 3.8873 |
| 3.7502 | 0.03 | 457920 | 3.8712 |
| 3.7204 | 1.03 | 534240 | 3.8612 |
| 3.6869 | 0.03 | 610560 | 3.8549 |
| 3.6592 | 1.03 | 686880 | 3.8507 |
| 3.635 | 0.03 | 763200 | 3.8488 |
| 3.6105 | 1.03 | 839520 | 3.8465 |
| 3.5921 | 0.03 | 915840 | 3.8457 |
| 3.571 | 1.03 | 992160 | 3.8463 |
| 3.5534 | 0.03 | 1068480 | 3.8462 |
| 3.5396 | 1.03 | 1144800 | 3.8481 |
| 3.5192 | 0.03 | 1221120 | 3.8492 |
| 3.5044 | 1.03 | 1297440 | 3.8499 |
| 3.4927 | 0.03 | 1373760 | 3.8517 |
| 3.4786 | 1.03 | 1450080 | 3.8538 |
| 3.4707 | 0.03 | 1526400 | 3.8534 |
| 3.4625 | 1.03 | 1602720 | 3.8559 |
| 3.4549 | 0.03 | 1679040 | 3.8577 |
| 3.4471 | 1.03 | 1755360 | 3.8590 |
| 3.4355 | 0.03 | 1831680 | 3.8603 |
| 3.4219 | 1.03 | 1908000 | 3.8616 |
| 3.4109 | 0.03 | 1984320 | 3.8622 |
| 3.4003 | 1.03 | 2060640 | 3.8631 |
| 3.3907 | 0.03 | 2136960 | 3.8647 |
| 3.376 | 0.03 | 2213280 | 3.8675 |
| 3.3656 | 1.03 | 2289600 | 3.8681 |
| 3.3573 | 0.03 | 2365920 | 3.8685 |
| 3.341 | 1.03 | 2442240 | 3.8701 |
| 3.33 | 0.03 | 2518560 | 3.8698 |
| 3.3222 | 1.03 | 2594880 | 3.8702 |
| 3.3137 | 0.03 | 2671200 | 3.8703 |
| 3.3096 | 1.03 | 2747520 | 3.8693 |
| 3.3045 | 0.03 | 2823840 | 3.8692 |
| 3.2982 | 0.03 | 2900160 | 3.8688 |
| 3.2947 | 1.03 | 2976480 | 3.8677 |
| 3.2868 | 0.02 | 3052726 | 3.8665 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
neovalle/H4rmoniousAnthea
|
neovalle
| 2024-01-24T20:42:47Z | 548 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"ecology",
"sustainability",
"ecolinguistics",
"dpo",
"conversational",
"dataset:neovalle/H4rmony_dpo",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T20:11:50Z |
---
license: mit
datasets:
- neovalle/H4rmony_dpo
tags:
- ecology
- sustainability
- ecolinguistics
- dpo
---
# Model Details

# Model Description
This model is based on teknium/OpenHermes-2.5-Mistral-7B, DPO fine-tuned with the H4rmony_dpo dataset.
Its completions should be more ecologically aware than the base model.
- Developed by: Jorge Vallego
- Funded by: Neovalle Ltd.
- Shared by: [email protected]
- Model type: mistral
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: teknium/OpenHermes-2.5-Mistral-7B
- Methodology: DPO
# Uses
Intended as a PoC to show the effects of the H4rmony_dpo dataset with DPO fine-tuning.
# Direct Use
For testing purposes, to gain insights that help with the continuous improvement of the H4rmony_dpo dataset.
# Downstream Use
Its direct use in applications is not recommended, as this model is under testing for a specific task only (Ecological Alignment).
# Out-of-Scope Use
Not meant to be used other than for testing and evaluation of the H4rmony_dpo dataset and ecological alignment.
# Bias, Risks, and Limitations
This model might produce biased completions already present in the base model, as well as others unintentionally introduced during fine-tuning.
# How to Get Started with the Model
It can be loaded and run in a Colab instance with High RAM.
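A minimal loading sketch with 🤗 Transformers is shown below. It assumes the standard Auto classes and a ChatML-style chat template inherited from the OpenHermes base model; dtype, device and sampling settings are illustrative only.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "neovalle/H4rmoniousAnthea"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build a ChatML-style prompt via the tokenizer's chat template
# (assumed to follow the OpenHermes base model's format).
messages = [{"role": "user", "content": "How can I make my garden more wildlife-friendly?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```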
# Training Details
Trained using DPO
# Training Data
H4rmony Dataset - https://huggingface.co/datasets/neovalle/H4rmony_dpo
|
mabobe-biyong/m2m100_418M-fr-vut-rel-ft
|
mabobe-biyong
| 2024-01-24T20:20:32Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"Cameroonian culture",
"Vute (Est)",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T12:35:39Z |
---
license: afl-3.0
language:
- fr
library_name: transformers
tags:
- Cameroonian culture
- Vute (Est)
---
|
mabobe-biyong/m2m100_418M-fr-ngi-rel-ft
|
mabobe-biyong
| 2024-01-24T20:11:50Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"Cameroonian culture",
"Ngiemboon",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T12:51:00Z |
---
license: afl-3.0
language:
- fr
library_name: transformers
tags:
- Cameroonian culture
- Ngiemboon
---
|
dthomas84/musicgen-medium
|
dthomas84
| 2024-01-24T20:04:49Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"musicgen",
"text-to-audio",
"arxiv:2306.05284",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-01-09T15:34:30Z |
---
inference: true
tags:
- musicgen
license: cc-by-nc-4.0
pipeline_tag: text-to-audio
widget:
- text: a funky house with 80s hip hop vibes
example_title: Prompt 1
- text: a chill song with influences from lofi, chillstep and downtempo
example_title: Prompt 2
- text: a catchy beat for a podcast intro
example_title: Prompt 3
---
# MusicGen - Medium - 1.5B
MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods, like MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
Four checkpoints are released:
- [small](https://huggingface.co/facebook/musicgen-small)
- [**medium** (this checkpoint)](https://huggingface.co/facebook/musicgen-medium)
- [large](https://huggingface.co/facebook/musicgen-large)
- [melody](https://huggingface.co/facebook/musicgen-melody)
## Example
Try out MusicGen yourself!
* Audiocraft Colab:
<a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Colab:
<a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
* Hugging Face Demo:
<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
<img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
</a>
## 🤗 Transformers Usage
You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.
1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:
```
pip install --upgrade pip
pip install --upgrade transformers scipy
```
2. Run inference via the `Text-to-Audio` (TTA) pipeline. You can run the MusicGen model through this pipeline in just a few lines of code!
```python
from transformers import pipeline
import scipy
synthesiser = pipeline("text-to-audio", "facebook/musicgen-medium")
music = synthesiser("lo-fi music with a soothing melody", forward_params={"do_sample": True})
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```
3. Run inference via the Transformers modelling code. You can use the processor + generate code to convert text into a mono 32 kHz audio waveform for more fine-grained control.
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration
processor = AutoProcessor.from_pretrained("facebook/musicgen-medium")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-medium")
inputs = processor(
text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
padding=True,
return_tensors="pt",
)
audio_values = model.generate(**inputs, max_new_tokens=256)
```
4. Listen to the audio samples either in an ipynb notebook:
```python
from IPython.display import Audio
sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
```python
import scipy
sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
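As noted above, MusicGen generates audio at roughly 50 tokens per second, so the `max_new_tokens` argument maps directly to output duration (256 tokens is about 5 seconds). Continuing the processor + `generate` example, a small sketch with an illustrative helper (not part of the Transformers API):
```python
# ~50 audio tokens per second of output, so a target duration maps onto max_new_tokens.
def tokens_for_seconds(seconds: float, frame_rate: int = 50) -> int:
    return int(seconds * frame_rate)

# Generate roughly ten-second clips instead of the ~5 s given by max_new_tokens=256.
audio_values = model.generate(**inputs, max_new_tokens=tokens_for_seconds(10))
```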
## Audiocraft Usage
You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):
1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```
2. Make sure to have [`ffmpeg`](https://ffmpeg.org/download.html) installed:
```
apt-get install ffmpeg
```
3. Run the following Python code:
```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained("medium")
model.set_generation_params(duration=8) # generate 8 seconds.
descriptions = ["happy rock", "energetic EDM"]
wav = model.generate(descriptions) # generates 2 samples.
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```
## Model details
**Organization developing the model:** The FAIR team of Meta AI.
**Model date:** MusicGen was trained between April 2023 and May 2023.
**Model version:** This is the version 1 of the model.
**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive language model based on the transformer architecture for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284).
**Citation details:**
```
@misc{copet2023simple,
title={Simple and Controllable Music Generation},
author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
year={2023},
eprint={2306.05284},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
**License:** Code is released under MIT, model weights are released under CC-BY-NC 4.0.
**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
## Intended use
**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
**Out-of-scope use cases:** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
## Metrics
**Models performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
- Overall quality of the music samples;
- Text relevance to the provided text input;
- Adherence to the melody for melody-guided music generation.
More details on performance measures and human studies can be found in the paper.
**Decision thresholds:** Not applicable.
## Evaluation datasets
The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
## Training datasets
The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
## Evaluation results
Below are the objective metrics obtained on MusicCaps with the released model. Note that for the publicly released models, we had all the datasets go through a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs), in order to keep only the instrumental part. This explains the difference in objective metrics with the models used in the paper.
| Model | Frechet Audio Distance | KLD | Text Consistency | Chroma Cosine Similarity |
|---|---|---|---|---|
| facebook/musicgen-small | 4.88 | 1.42 | 0.27 | - |
| **facebook/musicgen-medium** | 5.14 | 1.38 | 0.28 | - |
| facebook/musicgen-large | 5.48 | 1.37 | 0.28 | - |
| facebook/musicgen-melody | 4.93 | 1.41 | 0.27 | 0.44 |
More information can be found in the paper [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284), in the Results section.
## Limitations and biases
**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model to larger datasets can further improve its performance.
**Mitigations:** Vocals have been removed from the data source using corresponding tags, and then using a state-of-the-art music source separation method, namely using the open source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
**Limitations:**
- The model is not able to generate realistic vocals.
- The model has been trained with English descriptions and will not perform as well in other languages.
- The model does not perform equally well for all music styles and cultures.
- The model sometimes generates end of songs, collapsing to silence.
- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
**Biases:** The source data is potentially lacking in diversity, and not all music cultures are equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exists. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will help broaden the application to new and more representative data.
**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
|
mabobe-biyong/m2m100_418M-fr-bam-rel-ft
|
mabobe-biyong
| 2024-01-24T20:03:42Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"Cameroonian culture",
"Bambalang",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T12:43:15Z |
---
license: afl-3.0
language:
- fr
library_name: transformers
tags:
- Cameroonian culture
- Bambalang
---
|
TitanTec/ppo-HealthGatheringSupreme-T1
|
TitanTec
| 2024-01-24T20:01:22Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T20:01:10Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.79 +/- 5.07
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r TitanTec/ppo-HealthGatheringSupreme-T1
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=ppo-HealthGatheringSupreme-T1
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=ppo-HealthGatheringSupreme-T1 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
Nicows/ObsidianMerges
|
Nicows
| 2024-01-24T19:50:55Z | 0 | 8 | null |
[
"region:us"
] | null | 2023-08-02T21:00:33Z |
ObsidianModels
(for old models go to Old_Models branch)
Recommended model:
- ObsidianV3
Anime:
- ObsidianV3.3
Futanari model:
- NOOFO
Flat color:
- ObsidianV3-Flat
Semi-realistic/general model:
- ObsidianV3
|
jtatman/felladrin-tinymistral-248m-v4-dpo
|
jtatman
| 2024-01-24T19:46:20Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"DPO",
"reasoning",
"conversational",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T08:41:47Z |
---
library_name: transformers
tags:
- DPO
- reasoning
- mistral
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
pipeline_tag: text-generation
---
# Model Card for felladrin-tinymistral-248m-v4-dpo
SFT model trained with orca DPO
## Model Details
### Model Description
Experimental.
ChatML format.
|
Ahmed235/roberta-base-topic_classification_simple
|
Ahmed235
| 2024-01-24T19:43:24Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T18:34:11Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-topic_classification_simple
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-topic_classification_simple
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3253
- Accuracy: 0.8445839874411303
- F1: 0.8435559601445874
## Model description
More information needed
## Intended uses & limitations
More information needed
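In the absence of documented usage, a minimal inference sketch with the 🤗 `pipeline` API might look like the following; the label set is defined by the (undocumented) fine-tuning dataset.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Ahmed235/roberta-base-topic_classification_simple",
)
# Returns [{'label': ..., 'score': ...}] with labels from the fine-tuning dataset.
print(classifier("The central bank raised interest rates again this quarter."))
```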
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------------------------:|:--------------------------:|
| No log | 1.0 | 353 | 0.6772 | {'accuracy': 0.7905359946176272} | {'f1': 0.7881026657042776} |
| 0.8304 | 2.0 | 706 | 0.6028 | {'accuracy': 0.8187934514465127} | {'f1': 0.8207294945978928} |
| 0.3839 | 3.0 | 1059 | 0.5942 | {'accuracy': 0.8344920385736713} | {'f1': 0.8333019225828988} |
| 0.3839 | 4.0 | 1412 | 0.6904 | {'accuracy': 0.8340435075128952} | {'f1': 0.8330992428789376} |
| 0.2015 | 5.0 | 1765 | 0.8314 | {'accuracy': 0.8264184794797039} | {'f1': 0.82429813311833} |
| 0.118 | 6.0 | 2118 | 0.8572 | {'accuracy': 0.8356133662256111} | {'f1': 0.8349736274018552} |
| 0.118 | 7.0 | 2471 | 0.9742 | {'accuracy': 0.8383045525902669} | {'f1': 0.8376600364979794} |
| 0.0804 | 8.0 | 2824 | 1.0628 | {'accuracy': 0.8333707109217313} | {'f1': 0.8313400577604307} |
| 0.0508 | 9.0 | 3177 | 1.0866 | {'accuracy': 0.8333707109217313} | {'f1': 0.832415418717587} |
| 0.0406 | 10.0 | 3530 | 1.1633 | {'accuracy': 0.8432383942588024} | {'f1': 0.8425868379595812} |
| 0.0406 | 11.0 | 3883 | 1.2132 | {'accuracy': 0.8400986768333707} | {'f1': 0.8388873470699977} |
| 0.0245 | 12.0 | 4236 | 1.2799 | {'accuracy': 0.836958959407939} | {'f1': 0.8378019487138132} |
| 0.0139 | 13.0 | 4589 | 1.2379 | {'accuracy': 0.8434626597891904} | {'f1': 0.8429633731503271} |
| 0.0139 | 14.0 | 4942 | 1.2578 | {'accuracy': 0.8445839874411303} | {'f1': 0.8439974594663667} |
| 0.014 | 15.0 | 5295 | 1.3392 | {'accuracy': 0.8407714734245346} | {'f1': 0.8405188286141088} |
| 0.0111 | 16.0 | 5648 | 1.2977 | {'accuracy': 0.8443597219107423} | {'f1': 0.8438293082262649} |
| 0.0099 | 17.0 | 6001 | 1.3405 | {'accuracy': 0.8412200044853106} | {'f1': 0.8400992068548403} |
| 0.0099 | 18.0 | 6354 | 1.3433 | {'accuracy': 0.8405472078941467} | {'f1': 0.839917724407298} |
| 0.0041 | 19.0 | 6707 | 1.3269 | {'accuracy': 0.8445839874411303} | {'f1': 0.8434224071770644} |
| 0.0041 | 20.0 | 7060 | 1.3253 | {'accuracy': 0.8445839874411303} | {'f1': 0.8435559601445874} |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Changchoichang2104/siamesemodel_facerecognition
|
Changchoichang2104
| 2024-01-24T19:36:40Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T19:08:10Z |
---
license: apache-2.0
---
# Face Recognition Siamese Model
This project implements a face recognition system using a custom-trained Siamese Model with the MobileNet architecture. The model is trained on the Labeled Faces in the Wild (LFW) dataset.
## Table of Contents
- [Dataset](#dataset)
- [Model Training](#model-training)
- [Pretrained MobileNet](#pretrained-mobilenet)
- [Usage](#usage)
- [References](#references)
## Dataset
### Downloading the Dataset
The dataset for custom training can be downloaded from the LFW website:
- [Download Dataset](https://vis-www.cs.umass.edu/lfw/)
Please download the dataset in the 'All images as gzipped tar file' format. After downloading, extract the contents into the `data/negative` folder for retraining the model.
## Model Training
### Training Process
The training process is highly influenced by the following YouTube playlist:
- [Face Recognition Training Tutorial](https://www.youtube.com/watch?v=bK_k7eebGgc&list=PLgNJO2hghbmhHuhURAGbe6KWpiYZt0AMH)
## Pretrained MobileNet
The MobileNet architecture is used as the base model, with additional layers to fit the Siamese model structure. The MobileNet used is within the TensorFlow framework.
## Usage
Follow these steps to set up and use the face recognition system:
**Train Your Model**:
- Use the provided Jupyter notebook to train your model. Make sure you have downloaded and prepared the dataset as described in the [Dataset](#dataset) section.
## References
- Labeled Faces in the Wild (LFW): [Visit LFW Website](https://vis-www.cs.umass.edu/lfw/)
- Siamese Network Tutorial: [YouTube Playlist](https://www.youtube.com/watch?v=bK_k7eebGgc&list=PLgNJO2hghbmhHuhURAGbe6KWpiYZt0AMH)
## Have Questions or Feedback?
If you have any questions, concerns, or feedback about this project, we'd love to hear from you! Feel free to open an issue in this repository or contact the contributors directly. Your input is valuable to us and will help in improving this project.
|
C-Stuti/temp_model_outputdir
|
C-Stuti
| 2024-01-24T19:35:59Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T19:35:03Z |
---
license: mit
base_model: roberta-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: temp_model_outputdir
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# temp_model_outputdir
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3571
- Precision: 0.9390
- Recall: 0.9355
- F1: 0.9315
- Accuracy: 0.9355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:-----:|:--------:|:------:|:---------------:|:---------:|:------:|
| 1.9118 | 1.0 | 1511 | 0.8173 | 0.8042 | 0.7125 | 0.8320 | 0.8173 |
| 0.6271 | 2.0 | 3022 | 0.8402 | 0.8360 | 0.6493 | 0.8535 | 0.8402 |
| 0.5214 | 3.0 | 4533 | 0.8342 | 0.8285 | 0.7902 | 0.8391 | 0.8342 |
| 0.7385 | 4.0 | 6044 | 0.8769 | 0.8724 | 0.5748 | 0.8879 | 0.8769 |
| 0.6674 | 5.0 | 7555 | 0.8640 | 0.8602 | 0.5157 | 0.8802 | 0.8640 |
| 0.4279 | 6.0 | 9066 | 0.9077 | 0.9029 | 0.4802 | 0.9148 | 0.9077 |
| 0.5507 | 7.0 | 10577 | 0.3693 | 0.9371 | 0.9332 | 0.9288 | 0.9332 |
| 0.2703 | 8.0 | 12088 | 0.3571 | 0.9390 | 0.9355 | 0.9315 | 0.9355 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
ylacombe/w2v-bert-2.0-mongolian-colab-CV16.0
|
ylacombe
| 2024-01-24T19:33:20Z | 22 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_0",
"base_model:ylacombe/w2v-bert-2.0",
"base_model:finetune:ylacombe/w2v-bert-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-18T14:28:03Z |
---
base_model: ylacombe/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
model-index:
- name: w2v-bert-2.0-mongolian-colab-CV16.0
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_16_0
type: common_voice_16_0
config: mn
split: test
args: mn
metrics:
- name: Wer
type: wer
value: 0.3382089065303681
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-mongolian-colab-CV16.0
This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5279
- Wer: 0.3382
## Model description
More information needed
## Intended uses & limitations
More information needed
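A minimal transcription sketch with the ASR `pipeline` is shown below; it assumes a recent 🤗 Transformers release with Wav2Vec2-BERT support, and the audio path is a placeholder.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="ylacombe/w2v-bert-2.0-mongolian-colab-CV16.0",
)
# Pass the path to an audio file (placeholder name); the pipeline handles decoding and resampling.
print(asr("sample_mongolian_audio.wav")["text"])
```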
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.0592 | 2.37 | 300 | 0.7052 | 0.5907 |
| 0.3893 | 4.74 | 600 | 0.5848 | 0.4508 |
| 0.1926 | 7.11 | 900 | 0.5277 | 0.3830 |
| 0.0857 | 9.49 | 1200 | 0.5279 | 0.3382 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
bartowski/internlm2-math-7b-llama-exl2
|
bartowski
| 2024-01-24T19:28:43Z | 1 | 1 | null |
[
"math",
"text-generation",
"en",
"zh",
"license:other",
"region:us"
] |
text-generation
| 2024-01-24T17:02:25Z |
---
pipeline_tag: text-generation
license: other
language:
- en
- zh
tags:
- math
quantized_by: bartowski
---
#### Special thanks to <a href="https://huggingface.co/chargoddard">Charles Goddard</a> for the conversion script to create llama models from internlm
## Exllama v2 Quantizations of internlm2-math-7b-llama
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/internlm/internlm2-math-7b
| Branch | Bits | lm_head bits | Size | Description |
| ----- | ---- | ------- | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/internlm2-math-7b-llama-exl2/tree/8_0) | 8.0 | 8.0 | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/internlm2-math-7b-llama-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/internlm2-math-7b-llama-exl2/tree/5_0) | 5.0 | 6.0 | 7.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/internlm2-math-7b-llama-exl2/tree/4_25) | 4.25 | 6.0 | 6.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/internlm2-math-7b-llama-exl2/tree/3_5) | 3.5 | 6.0 | 6.1 GB | Lower quality, only use if you have to. |
All VRAM requirements estimated from 16k context. For 32k context add ~2 GB.
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/internlm2-math-7b-llama-exl2 internlm2-math-7b-llama-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about measurement.json) to a folder called `internlm2-math-7b-llama-exl2`:
```shell
mkdir internlm2-math-7b-llama-exl2
huggingface-cli download bartowski/internlm2-math-7b-llama-exl2 --local-dir internlm2-math-7b-llama-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir internlm2-math-7b-llama-exl2-6_5
huggingface-cli download bartowski/internlm2-math-7b-llama-exl2 --revision 6_5 --local-dir internlm2-math-7b-llama-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir internlm2-math-7b-llama-exl2-6.5
huggingface-cli download bartowski/internlm2-math-7b-llama-exl2 --revision 6_5 --local-dir internlm2-math-7b-llama-exl2-6.5 --local-dir-use-symlinks False
```
|
jtatman/alpacaCoT-tinymistral-v2
|
jtatman
| 2024-01-24T19:21:24Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T21:08:00Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for alpacaCoT-tinymistral-v2
ChatML Format. Still in early development.
|
mabobe-biyong/m2m100_418M-fr-mus-rel-ft
|
mabobe-biyong
| 2024-01-24T19:20:33Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"Cameroonian culture",
"Musgu (Adamaoua)",
"fr",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-21T12:45:40Z |
---
license: afl-3.0
language:
- fr
library_name: transformers
tags:
- Cameroonian culture
- Musgu (Adamaoua)
---
|
Gourav12/mcccc
|
Gourav12
| 2024-01-24T19:19:51Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-24T19:19:51Z |
---
license: bigscience-bloom-rail-1.0
---
|
GerardMR/rl_course_vizdoom_health_gathering_supreme
|
GerardMR
| 2024-01-24T19:12:38Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T19:12:31Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 8.54 +/- 4.58
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r GerardMR/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note that you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it previously concluded.
|
jtatman/tinymistral-v2-pycoder-instruct-248m
|
jtatman
| 2024-01-24T19:12:03Z | 101 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"dataset:jtatman/python-code-dataset-500k",
"dataset:jtatman/python-github-code-instruct-filtered-5k",
"dataset:jtatman/pile_python_instruct_format",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T11:33:50Z |
---
license: apache-2.0
datasets:
- jtatman/python-code-dataset-500k
- jtatman/python-github-code-instruct-filtered-5k
- jtatman/pile_python_instruct_format
library_name: transformers
tags:
- code
---
# Model Card for tinymistral-v2-pycoder-instruct-248m
This modelcard is for tinymistral-v2-pycoder-instruct, a python-specific code generation model on top of [Locutusque/TinyMistral-248M-v2-Instruct](https://huggingface.co/Locutusque/TinyMistral-248M-v2-Instruct).
## Model Details
This instruct model follows the original in using ChatML format.
An empty prompt will return various information from the base model, but using the instruct format will deliver python code of varying quality.
### Model Description
Model is in active development, base model is in active development, and all should be treated with caution.
- **Developed by:** Locutusque and M4ai
- **Funded by:** Lint from a corner pocket
- **Shared by:** [jtatman](https://huggingface.co/jtatman)
- **Model type:** [MistralForCausalLM](https://huggingface.co/Locutusque/TinyMistral-248M-v2)
- **License:** MIT
- **Finetuned from model:** [Locutusque/TinyMistral-248M-v2-Instruct](https://huggingface.co/Locutusque/TinyMistral-248M-v2-Instruct)
## Uses
Generate python code.
### Direct Use
It could probably be fine-tuned further with a more comprehensive dataset. Experiments are in progress.
## How to Get Started with the Model
Use the prompt format below to get started with the model.
<|im_start|>user
Write a function for multiplying two numbers, from variables 'a' and 'b'.<|im_end|>
<|im_start|>assistant
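A minimal generation sketch with 🤗 Transformers using that prompt format is shown below; the sampling settings are illustrative, not tuned values.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jtatman/tinymistral-v2-pycoder-instruct-248m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# ChatML-style instruct prompt, as described above.
prompt = (
    "<|im_start|>user\n"
    "Write a function for multiplying two numbers, from variables 'a' and 'b'.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```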
## Training Details
### Training Data
Custom formatted existing python data from:
- [jtatman/python-code-dataset-500k](https://huggingface.co/datasets/jtatman/python-code-dataset-500k)
- [jtatman/python-github-code-instruct-filtered-5k](https://huggingface.co/datasets/jtatman/python-github-code-instruct-filtered-5k)
- [jtatman/pile_python_instruct_format](https://huggingface.co/datasets/jtatman/pile_python_instruct_format)
### Training Procedure
Repeat training depending on compute budget.
#### Preprocessing
Conversion to alpaca/instruct format.
#### Training Hyperparameters
- **Training regime:** fp16, merge of parameter fine-tune adapters when necessary and helpful.
## Evaluation
#### Metrics
Latest metrics:
- epoch: 4.87
- global_step: 220
- learning_rate: 0.00006713780918727916
- loss: 2.3736
|
CLMBR/npi-sim-ques-transformer-1
|
CLMBR
| 2024-01-24T19:11:53Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T15:08:07Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-sim-ques-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-sim-ques-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2375 | 0.03 | 76320 | 4.1971 |
| 4.0282 | 1.03 | 152640 | 4.0283 |
| 3.9236 | 0.03 | 228960 | 3.9527 |
| 3.8528 | 1.03 | 305280 | 3.9116 |
| 3.7989 | 0.03 | 381600 | 3.8860 |
| 3.7596 | 0.03 | 457920 | 3.8695 |
| 3.725 | 1.03 | 534240 | 3.8590 |
| 3.6914 | 0.03 | 610560 | 3.8515 |
| 3.663 | 1.03 | 686880 | 3.8472 |
| 3.6364 | 0.03 | 763200 | 3.8443 |
| 3.6133 | 1.03 | 839520 | 3.8424 |
| 3.5958 | 0.03 | 915840 | 3.8413 |
| 3.5717 | 1.03 | 992160 | 3.8403 |
| 3.5508 | 0.03 | 1068480 | 3.8411 |
| 3.5374 | 1.03 | 1144800 | 3.8404 |
| 3.5307 | 0.03 | 1221120 | 3.8424 |
| 3.5136 | 1.03 | 1297440 | 3.8437 |
| 3.5008 | 0.03 | 1373760 | 3.8459 |
| 3.4902 | 1.03 | 1450080 | 3.8467 |
| 3.4789 | 0.03 | 1526400 | 3.8487 |
| 3.469 | 1.03 | 1602720 | 3.8494 |
| 3.46 | 0.03 | 1679040 | 3.8510 |
| 3.4508 | 1.03 | 1755360 | 3.8526 |
| 3.437 | 0.03 | 1831680 | 3.8534 |
| 3.4233 | 1.03 | 1908000 | 3.8546 |
| 3.4119 | 0.03 | 1984320 | 3.8562 |
| 3.3993 | 1.03 | 2060640 | 3.8578 |
| 3.3929 | 0.03 | 2136960 | 3.8581 |
| 3.3765 | 1.03 | 2213280 | 3.8606 |
| 3.3611 | 0.03 | 2289600 | 3.8612 |
| 3.3543 | 1.03 | 2365920 | 3.8624 |
| 3.3543 | 0.03 | 2442240 | 3.8624 |
| 3.3435 | 1.03 | 2518560 | 3.8634 |
| 3.3332 | 0.03 | 2594880 | 3.8640 |
| 3.3239 | 1.03 | 2671200 | 3.8647 |
| 3.318 | 0.03 | 2747520 | 3.8650 |
| 3.3101 | 1.03 | 2823840 | 3.8644 |
| 3.3052 | 0.03 | 2900160 | 3.8643 |
| 3.3 | 1.03 | 2976480 | 3.8636 |
| 3.2909 | 0.02 | 3052726 | 3.8619 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
enrique2701/taxi
|
enrique2701
| 2024-01-24T19:05:51Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T19:05:42Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.82
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="enrique2701/taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
guirnd/q-Taxi-v3
|
guirnd
| 2024-01-24T19:01:13Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T19:01:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.26 +/- 2.59
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="guirnd/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LoneStriker/WestLake-7B-v2-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-24T18:57:45Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T18:51:54Z |
---
license: apache-2.0
language:
- en
library_name: transformers
---

**Update Notes:**
*Version 2 was trained for one additional epoch cycle, for three in total.*
# Westlake-7Bv2: Role-Play & Text Generation Specialist Model
Welcome to the documentation of Westlake-7B, a cutting-edge language model designed for exceptional role-play and text generation tasks. This README aims to provide an overview of its capabilities, usage guidelines, and potential applications.
## About Westlake-7Bv2
Westlake-7B is built upon a vast corpus of diverse texts, enabling it to generate contextually relevant responses in various scenarios. With its impressive size of 7 billion parameters, this model excels at understanding nuances in language and producing creative outputs.
### Key Features
1. **Role-Play**: Westlake-7Bv2 can seamlessly adapt to different character personas and engage in dynamic conversations while maintaining consistency throughout the interaction. It can generate believable dialogues across various genres, including fiction, non-fiction, historical events, or even fantasy worlds.
2. **Text Generation**: This model is proficient at generating original content such as stories, poems, essays, news articles, and more. Its ability to capture the essence of different writing styles makes it an ideal tool for creative writers seeking inspiration or assistance in their projects.
3. **Contextual Understanding**: Westlake-7B's extensive training allows it to comprehend complex contexts and generate responses that align with given situations. It can handle multiple topics simultaneously, making it versatile across various applications.
4. **Continuous Learning**: As a language model, Westlake-7B continuously improves its performance through ongoing training on new data sets. This ensures its capabilities remain up-to-date and relevant in an ever-evolving world of communication.
## Usage Guidelines
To utilize Westlake-7Bv2 for your projects or experiments, follow these steps:
1. **Prompting**: Provide clear and concise prompts that outline the desired role-play scenario or text generation task. The quality of output depends heavily on the clarity and relevance of input instructions.
2. **Feedback Loop**: For optimal results, consider incorporating a feedback loop into your application to refine generated outputs based on user preferences or additional contextual information. This iterative process can significantly enhance the model's performance in specific domains.
3. **Ethical Considerations**: As with any AI system, ensure responsible usage of Westlake-7B by avoiding harmful content generation or misuse of its capabilities.
## Potential Applications
Westlake-7Bv2's versatility makes it suitable for various applications across different industries:
1. **Creative Writing**: Assist authors in generating new ideas, expanding storylines, or even completing drafts by providing creative suggestions and textual content.
2. **Education**: Enhance language learning platforms with interactive role-play scenarios to improve students' communication skills and cultural understanding.
3. **Gaming**: Integrate Westlake-7B into game engines for dynamic non-player character interactions or generating unique questlines based on player choices.
4. **Customer Support**: Leverage the model's conversational abilities to create chatbots capable of handling complex queries and providing personalized assistance.
5. **Social Media**: Develop applications that generate engaging content such as captions, status updates, or even entire posts tailored to users' preferences and interests.
|
poGlingus/Mixtral-8x7B-Instruct-v0.1
|
poGlingus
| 2024-01-24T18:52:24Z | 1 | 1 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"region:us"
] | null | 2024-01-24T18:52:08Z |
---
library_name: peft
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
sumangpt/zephyr-finetuned
|
sumangpt
| 2024-01-24T18:51:21Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2024-01-24T09:10:28Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: HuggingFaceH4/zephyr-7b-beta
model-index:
- name: zephyr-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-finetuned
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
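Since this repository holds a PEFT adapter rather than full model weights, a minimal loading sketch attaches it to the base model; dtype and device settings are illustrative.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceH4/zephyr-7b-beta"
adapter_id = "sumangpt/zephyr-finetuned"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the fine-tuned PEFT adapter on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)
# From here, generation works as with any causal LM (tokenize, model.generate, decode).
```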
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
lstyles/output
|
lstyles
| 2024-01-24T18:51:15Z | 174 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T15:30:24Z |
---
tags:
- generated_from_trainer
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1507 | 1.0 | 597 | 0.0000 |
| 0.0013 | 2.0 | 1194 | 0.0000 |
| 0.0006 | 3.0 | 1791 | 0.0000 |
| 0.0004 | 4.0 | 2388 | 0.0000 |
| 0.0003 | 5.0 | 2985 | 0.0000 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jfmatos-isq/xlm-roberta-base-finetuned-panx-all
|
jfmatos-isq
| 2024-01-24T18:49:13Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-24T18:34:01Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1717
- F1: 0.8544
## Model description
More information needed
## Intended uses & limitations
More information needed
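A minimal NER sketch with the `token-classification` pipeline is shown below; the exact entity label set follows the PAN-X-style training data, which is not documented in this card.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jfmatos-isq/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
# Returns grouped entity spans with labels from the fine-tuning data.
print(ner("Jeff Dean works at Google in Mountain View."))
```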
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2993 | 1.0 | 835 | 0.1903 | 0.8089 |
| 0.1546 | 2.0 | 1670 | 0.1726 | 0.8428 |
| 0.1017 | 3.0 | 2505 | 0.1717 | 0.8544 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.10.3
|
Kjoekjoe/ai_test
|
Kjoekjoe
| 2024-01-24T18:48:33Z | 174 | 0 |
transformers
|
[
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-24T18:45:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
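In the absence of an official snippet, a minimal sketch (an assumption based on the repository's `encoder-decoder`/`text2text-generation` tags, not provided by the author) would load it as a standard seq2seq checkpoint, assuming a tokenizer is included in the repo:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "Kjoekjoe/ai_test"  # encoder-decoder checkpoint per the repository tags
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("Example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```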
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/WestLake-7B-v2-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-24T18:47:32Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T18:43:31Z |
---
license: apache-2.0
language:
- en
library_name: transformers
---

**Update Notes:**
*Version 2 trained 1 additional epoch cycle for 3 total*
# Westlake-7Bv2: Role-Play & Text Generation Specialist Model
Welcome to the documentation of Westlake-7Bv2, a cutting-edge language model designed for exceptional role-play and text generation tasks. This README file aims to provide an overview of its capabilities, usage guidelines, and potential applications.
## About Westlake-7Bv2
Westlake-7B is built upon a vast corpus of diverse texts, enabling it to generate contextually relevant responses in various scenarios. With its impressive size of 7 billion parameters, this model excels at understanding nuances in language and producing creative outputs.
### Key Features
1. **Role-Play**: Westlake-7Bv2 can seamlessly adapt to different character personas and engage in dynamic conversations while maintaining consistency throughout the interaction. It can generate believable dialogues across various genres, including fiction, non-fiction, historical events, or even fantasy worlds.
2. **Text Generation**: This model is proficient at generating original content such as stories, poems, essays, news articles, and more. Its ability to capture the essence of different writing styles makes it an ideal tool for creative writers seeking inspiration or assistance in their projects.
3. **Contextual Understanding**: Westlake-7B's extensive training allows it to comprehend complex contexts and generate responses that align with given situations. It can handle multiple topics simultaneously, making it versatile across various applications.
4. **Continuous Learning**: As a language model, Westlake-7B continuously improves its performance through ongoing training on new data sets. This ensures its capabilities remain up-to-date and relevant in an ever-evolving world of communication.
## Usage Guidelines
To utilize Westlake-7Bv2 for your projects or experiments, follow these steps:
1. **Prompting**: Provide clear and concise prompts that outline the desired role-play scenario or text generation task. The quality of output depends heavily on the clarity and relevance of input instructions.
2. **Feedback Loop**: For optimal results, consider incorporating a feedback loop into your application to refine generated outputs based on user preferences or additional contextual information. This iterative process can significantly enhance the model's performance in specific domains.
3. **Ethical Considerations**: As with any AI system, ensure responsible usage of Westlake-7B by avoiding harmful content generation or misuse of its capabilities.
## Potential Applications
Westlake-7Bv2's versatility makes it suitable for various applications across different industries:
1. **Creative Writing**: Assist authors in generating new ideas, expanding storylines, or even completing drafts by providing creative suggestions and textual content.
2. **Education**: Enhance language learning platforms with interactive role-play scenarios to improve students' communication skills and cultural understanding.
3. **Gaming**: Integrate Westlake-7B into game engines for dynamic non-player character interactions or generating unique questlines based on player choices.
4. **Customer Support**: Leverage the model's conversational abilities to create chatbots capable of handling complex queries and providing personalized assistance.
5. **Social Media**: Develop applications that generate engaging content such as captions, status updates, or even entire posts tailored to users' preferences and interests.
|
am-infoweb/classification_test_24Jan
|
am-infoweb
| 2024-01-24T18:44:49Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"longformer",
"text-classification",
"generated_from_trainer",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T16:38:44Z |
---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: classification_test_24Jan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classification_test_24Jan
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2570
- Accuracy: 0.9687
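As a usage sketch (not part of the original card), the checkpoint can presumably be queried through the `text-classification` pipeline; the input string below is only an illustration:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="am-infoweb/classification_test_24Jan")
# Longformer accepts long inputs (up to 4096 tokens), so whole documents can be passed.
print(clf("Example document text to classify."))
```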
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5747 | 1.0 | 1318 | 0.2918 | 0.9311 |
| 0.3317 | 2.0 | 2636 | 0.2456 | 0.9459 |
| 0.2215 | 3.0 | 3954 | 0.2050 | 0.9550 |
| 0.1525 | 4.0 | 5272 | 0.2775 | 0.9590 |
| 0.0977 | 5.0 | 6590 | 0.2771 | 0.9585 |
| 0.0899 | 6.0 | 7908 | 0.2663 | 0.9613 |
| 0.082 | 7.0 | 9226 | 0.2500 | 0.9636 |
| 0.0698 | 8.0 | 10544 | 0.2939 | 0.9607 |
| 0.0287 | 9.0 | 11862 | 0.2668 | 0.9653 |
| 0.0417 | 10.0 | 13180 | 0.2570 | 0.9687 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/WestLake-7B-v2-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-24T18:43:28Z | 11 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T18:41:45Z |
---
license: apache-2.0
language:
- en
library_name: transformers
---

**Update Notes:**
*Version 2 trained 1 additional epoch cycle for 3 total*
# Westlake-7Bv2: Role-Play & Text Generation Specialist Model
Welcome to the documentation of Westlake-7Bv2, a cutting-edge language model designed for exceptional role-play and text generation tasks. This README file aims to provide an overview of its capabilities, usage guidelines, and potential applications.
## About Westlake-7Bv2
Westlake-7B is built upon a vast corpus of diverse texts, enabling it to generate contextually relevant responses in various scenarios. With its impressive size of 7 billion parameters, this model excels at understanding nuances in language and producing creative outputs.
### Key Features
1. **Role-Play**: Westlake-7Bv2 can seamlessly adapt to different character personas and engage in dynamic conversations while maintaining consistency throughout the interaction. It can generate believable dialogues across various genres, including fiction, non-fiction, historical events, or even fantasy worlds.
2. **Text Generation**: This model is proficient at generating original content such as stories, poems, essays, news articles, and more. Its ability to capture the essence of different writing styles makes it an ideal tool for creative writers seeking inspiration or assistance in their projects.
3. **Contextual Understanding**: Westlake-7B's extensive training allows it to comprehend complex contexts and generate responses that align with given situations. It can handle multiple topics simultaneously, making it versatile across various applications.
4. **Continuous Learning**: As a language model, Westlake-7B continuously improves its performance through ongoing training on new data sets. This ensures its capabilities remain up-to-date and relevant in an ever-evolving world of communication.
## Usage Guidelines
To utilize Westlake-7Bv2 for your projects or experiments, follow these steps:
1. **Prompting**: Provide clear and concise prompts that outline the desired role-play scenario or text generation task. The quality of output depends heavily on the clarity and relevance of input instructions.
2. **Feedback Loop**: For optimal results, consider incorporating a feedback loop into your application to refine generated outputs based on user preferences or additional contextual information. This iterative process can significantly enhance the model's performance in specific domains.
3. **Ethical Considerations**: As with any AI system, ensure responsible usage of Westlake-7B by avoiding harmful content generation or misuse of its capabilities.
## Potential Applications
Westlake-7Bv2's versatility makes it suitable for various applications across different industries:
1. **Creative Writing**: Assist authors in generating new ideas, expanding storylines, or even completing drafts by providing creative suggestions and textual content.
2. **Education**: Enhance language learning platforms with interactive role-play scenarios to improve students' communication skills and cultural understanding.
3. **Gaming**: Integrate Westlake-7B into game engines for dynamic non-player character interactions or generating unique questlines based on player choices.
4. **Customer Support**: Leverage the model's conversational abilities to create chatbots capable of handling complex queries and providing personalized assistance.
5. **Social Media**: Develop applications that generate engaging content such as captions, status updates, or even entire posts tailored to users' preferences and interests.
|
LoneStriker/WestLake-7B-v2-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-24T18:41:43Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T18:40:05Z |
---
license: apache-2.0
language:
- en
library_name: transformers
---

**Update Notes:**
*Version 2 trained 1 additional epoch cycle for 3 total*
# Westlake-7Bv2: Role-Play & Text Generation Specialist Model
Welcome to the documentation of Westlake-7Bv2, a cutting-edge language model designed for exceptional role-play and text generation tasks. This README file aims to provide an overview of its capabilities, usage guidelines, and potential applications.
## About Westlake-7Bv2
Westlake-7B is built upon a vast corpus of diverse texts, enabling it to generate contextually relevant responses in various scenarios. With its impressive size of 7 billion parameters, this model excels at understanding nuances in language and producing creative outputs.
### Key Features
1. **Role-Play**: Westlake-7Bv2 can seamlessly adapt to different character personas and engage in dynamic conversations while maintaining consistency throughout the interaction. It can generate believable dialogues across various genres, including fiction, non-fiction, historical events, or even fantasy worlds.
2. **Text Generation**: This model is proficient at generating original content such as stories, poems, essays, news articles, and more. Its ability to capture the essence of different writing styles makes it an ideal tool for creative writers seeking inspiration or assistance in their projects.
3. **Contextual Understanding**: Westlake-7B's extensive training allows it to comprehend complex contexts and generate responses that align with given situations. It can handle multiple topics simultaneously, making it versatile across various applications.
4. **Continuous Learning**: As a language model, Westlake-7B continuously improves its performance through ongoing training on new data sets. This ensures its capabilities remain up-to-date and relevant in an ever-evolving world of communication.
## Usage Guidelines
To utilize Westlake-7Bv2 for your projects or experiments, follow these steps:
1. **Prompting**: Provide clear and concise prompts that outline the desired role-play scenario or text generation task. The quality of output depends heavily on the clarity and relevance of input instructions.
2. **Feedback Loop**: For optimal results, consider incorporating a feedback loop into your application to refine generated outputs based on user preferences or additional contextual information. This iterative process can significantly enhance the model's performance in specific domains.
3. **Ethical Considerations**: As with any AI system, ensure responsible usage of Westlake-7B by avoiding harmful content generation or misuse of its capabilities.
## Potential Applications
Westlake-7Bv2's versatility makes it suitable for various applications across different industries:
1. **Creative Writing**: Assist authors in generating new ideas, expanding storylines, or even completing drafts by providing creative suggestions and textual content.
2. **Education**: Enhance language learning platforms with interactive role-play scenarios to improve students' communication skills and cultural understanding.
3. **Gaming**: Integrate Westlake-7B into game engines for dynamic non-player character interactions or generating unique questlines based on player choices.
4. **Customer Support**: Leverage the model's conversational abilities to create chatbots capable of handling complex queries and providing personalized assistance.
5. **Social Media**: Develop applications that generate engaging content such as captions, status updates, or even entire posts tailored to users' preferences and interests.
|
dtrifuno/wav2vec2-xls-r-300m-mk
|
dtrifuno
| 2024-01-24T18:33:11Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:fleurs",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-15T15:48:39Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- fleurs
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-300m-mk
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: fleurs
type: fleurs
config: mk_mk
split: test
args: mk_mk
metrics:
- name: Wer
type: wer
value: 0.14327357528057136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-mk
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on Macedonian
using the train and validation splits of the FLEURS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1589
- Wer: 0.1433
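A minimal transcription sketch (not from the original card), assuming a 16 kHz Macedonian recording at a hypothetical local path:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="dtrifuno/wav2vec2-xls-r-300m-mk")
print(asr("recording_mk.wav"))  # hypothetical path to a 16 kHz mono audio file
```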
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.609 | 2.33 | 400 | 0.3751 | 0.4184 |
| 0.232 | 4.65 | 800 | 0.1694 | 0.1960 |
| 0.0773 | 6.98 | 1200 | 0.1630 | 0.1598 |
| 0.0407 | 9.3 | 1600 | 0.1589 | 0.1433 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
HarshithNLP/bloom_in_out
|
HarshithNLP
| 2024-01-24T18:31:47Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloom-3b",
"base_model:adapter:bigscience/bloom-3b",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-24T18:07:02Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloom-3b
model-index:
- name: bloom_in_out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_in_out
This model is a fine-tuned version of [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b) on an unknown dataset.
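Since this repository holds a PEFT adapter rather than a full checkpoint, a usage sketch (assumed, not taken from the card) would attach the adapter to the `bigscience/bloom-3b` base model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-3b")
model = PeftModel.from_pretrained(base, "HarshithNLP/bloom_in_out")  # attach the adapter
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-3b")

inputs = tokenizer("Hello, ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```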
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
chasedreaminf/my_awesome_mind_model
|
chasedreaminf
| 2024-01-24T18:29:12Z | 152 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:minds14",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-24T18:17:28Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- minds14
metrics:
- accuracy
model-index:
- name: my_awesome_mind_model
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: minds14
type: minds14
config: en-US
split: train
args: en-US
metrics:
- name: Accuracy
type: accuracy
value: 0.04424778761061947
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_mind_model
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6543
- Accuracy: 0.0442
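For completeness, a usage sketch (not in the original card; the reported accuracy is close to chance, so this is for demonstration only), with a hypothetical audio path:
```python
from transformers import pipeline

clf = pipeline("audio-classification", model="chasedreaminf/my_awesome_mind_model")
print(clf("customer_call.wav"))  # hypothetical 16 kHz recording of a banking intent
```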
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 2.6429 | 0.0708 |
| No log | 1.87 | 7 | 2.6435 | 0.0708 |
| 2.6326 | 2.93 | 11 | 2.6471 | 0.0619 |
| 2.6326 | 4.0 | 15 | 2.6514 | 0.0442 |
| 2.6326 | 4.8 | 18 | 2.6550 | 0.0531 |
| 2.6164 | 5.87 | 22 | 2.6521 | 0.0531 |
| 2.6164 | 6.93 | 26 | 2.6541 | 0.0531 |
| 2.6112 | 8.0 | 30 | 2.6543 | 0.0442 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
issafuad/safesign1
|
issafuad
| 2024-01-24T18:25:11Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"fr",
"it",
"de",
"es",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-24T18:25:09Z |
---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
inference: false
---
# Model Card for Mixtral-8x7B
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
## Warning
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%http://2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%http://2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
## Instruction format
This format must be strictly respected, otherwise the model will generate sub-optimal outputs.
The template used to build a prompt for the Instruct model is defined as follows:
```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```
Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings.
As reference, here is the pseudo-code used to tokenize instructions during fine-tuning:
```python
def tokenize(text):
    return tok.encode(text, add_special_tokens=False)

[BOS_ID] +
tokenize("[INST]") + tokenize(USER_MESSAGE_1) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_1) + [EOS_ID] +
…
tokenize("[INST]") + tokenize(USER_MESSAGE_N) + tokenize("[/INST]") +
tokenize(BOT_MESSAGE_N) + [EOS_ID]
```
In the pseudo-code above, note that the `tokenize` method should not add a BOS or EOS token automatically, but should add a prefix space.
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, transformers will load the model in full precision. You may therefore want to further reduce the memory requirements to run the model, using the optimizations offered in the HF ecosystem:
### In half-precision
Note `float16` precision only works on GPU devices
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to(0)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Lower precision (8-bit & 4-bit) using `bitsandbytes`
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
### Load the model with Flash Attention 2
<details>
<summary> Click to expand </summary>
```diff
+ import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True)
text = "Hello my name is"
+ inputs = tokenizer(text, return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
</details>
## Limitations
The Mixtral-8x7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
# The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
tawfikgh/T5-CNN-Daily-Mail-30000
|
tawfikgh
| 2024-01-24T18:19:19Z | 50 | 1 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:tawfikgh/T5-CNN-Daily-Mail",
"base_model:finetune:tawfikgh/T5-CNN-Daily-Mail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-19T11:19:52Z |
---
license: apache-2.0
base_model: tawfikgh/T5-CNN-Daily-Mail
tags:
- generated_from_keras_callback
model-index:
- name: tawfikgh/T5-CNN-Daily-Mail-30000
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tawfikgh/T5-CNN-Daily-Mail-30000
This model is a fine-tuned version of [tawfikgh/T5-CNN-Daily-Mail](https://huggingface.co/tawfikgh/T5-CNN-Daily-Mail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9838
- Train Accuracy: 0.4388
- Validation Loss: 1.7669
- Validation Accuracy: 0.4634
- Train Rouge1: 23.0643
- Train Rouge2: 9.2989
- Train Rougel: 18.6586
- Train Rougelsum: 21.4398
- Train F1: 0.9629
- Train Gen Len: 19.0
- Epoch: 0
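A summarization sketch (assumed, not part of the card); the repository ships TensorFlow weights, so the pipeline is pinned to the TF framework:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="tawfikgh/T5-CNN-Daily-Mail-30000",
    framework="tf",  # the repository contains TensorFlow weights
)
article = "(long news article text goes here)"  # a "summarize: " prefix may be needed
print(summarizer(article, max_length=60, min_length=10))
```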
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train F1 | Train Gen Len | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:------------:|:------------:|:------------:|:---------------:|:--------:|:-------------:|:-----:|
| 1.9838 | 0.4388 | 1.7669 | 0.4634 | 23.0643 | 9.2989 | 18.6586 | 21.4398 | 0.9629 | 19.0 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
web2savar/w2v-fine-tune-test-no-punct4
|
web2savar
| 2024-01-24T18:06:09Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_0",
"base_model:ylacombe/w2v-bert-2.0",
"base_model:finetune:ylacombe/w2v-bert-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-24T17:03:41Z |
---
base_model: ylacombe/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
model-index:
- name: w2v-fine-tune-test-no-punct4
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_16_0
type: common_voice_16_0
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 0.888
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-fine-tune-test-no-punct4
This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0213
- Wer: 0.888
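A manual inference sketch (not from the original card), using placeholder audio so the snippet is self-contained:
```python
import numpy as np
import torch
from transformers import AutoModelForCTC, AutoProcessor

repo_id = "web2savar/w2v-fine-tune-test-no-punct4"
processor = AutoProcessor.from_pretrained(repo_id)
model = AutoModelForCTC.from_pretrained(repo_id)

speech = np.zeros(16_000, dtype=np.float32)  # placeholder: one second of silence at 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(logits.argmax(dim=-1)))
```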
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| 5.5721 | 1.54 | 20 | 3.5316 | 1.0 |
| 3.3513 | 3.08 | 40 | 3.3113 | 1.0 |
| 2.9765 | 4.62 | 60 | 3.1604 | 1.016 |
| 2.0468 | 6.15 | 80 | 2.5162 | 1.02 |
| 0.8977 | 7.69 | 100 | 1.4944 | 1.008 |
| 0.3831 | 9.23 | 120 | 1.0213 | 0.888 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
flavioegoncalves/classificator
|
flavioegoncalves
| 2024-01-24T18:03:45Z | 146 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-24T17:56:20Z |
---
base_model: facebook/wav2vec2
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: facebook/wav2vec2-base-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook/wav2vec2-base-finetuned-gtzan
This model is a fine-tuned version of [facebook/wav2vec2](https://huggingface.co/facebook/wav2vec2) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0140
- Accuracy: 0.4
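A genre-classification sketch (assumed, not included in the card), with a hypothetical music clip:
```python
from transformers import pipeline

genre_clf = pipeline("audio-classification", model="flavioegoncalves/classificator")
print(genre_clf("track.wav", top_k=3))  # hypothetical path to a short music clip
```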
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.96 | 6 | 2.2871 | 0.195 |
| 2.2933 | 1.92 | 12 | 2.2658 | 0.18 |
| 2.2933 | 2.88 | 18 | 2.2214 | 0.275 |
| 2.2388 | 4.0 | 25 | 2.1885 | 0.29 |
| 2.1455 | 4.96 | 31 | 2.1246 | 0.38 |
| 2.1455 | 5.92 | 37 | 2.1139 | 0.35 |
| 2.0823 | 6.88 | 43 | 2.0462 | 0.36 |
| 2.0279 | 8.0 | 50 | 2.0282 | 0.405 |
| 2.0279 | 8.96 | 56 | 2.0133 | 0.405 |
| 1.9928 | 9.6 | 60 | 2.0140 | 0.4 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
bcse/tsukasa-120b-qlora-GGUF
|
bcse
| 2024-01-24T18:03:35Z | 7 | 0 | null |
[
"gguf",
"en",
"dataset:PygmalionAI/PIPPA",
"dataset:lemonilia/LimaRP",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-22T16:40:44Z |
---
license: llama2
language:
- en
datasets:
- PygmalionAI/PIPPA
- lemonilia/LimaRP
---
# tsukasa-120b-qlora - GGUF
- Original model: [tsukasa-120b-qlora](https://huggingface.co/ludis/tsukasa-120b-qlora)
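No usage instructions are given here; a sketch with `llama-cpp-python` follows (the filename and prompt are assumptions; check the original model card for the intended prompt format):
```python
from llama_cpp import Llama

# The .gguf filename below is a placeholder for whichever quantization you downloaded.
llm = Llama(model_path="tsukasa-120b-qlora.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a short in-character greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```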
|
golesheed/whisper-large-v2-fa
|
golesheed
| 2024-01-24T18:00:52Z | 77 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"fa",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-23T15:40:39Z |
---
language:
- fa
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large Fa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Fa
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2511
- Wer: 52.3497
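A transcription sketch (not part of the original card), assuming a Persian audio file at a hypothetical path:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="golesheed/whisper-large-v2-fa",
    chunk_length_s=30,  # chunk long recordings before decoding
)
print(asr("speech_fa.wav"))  # hypothetical path to a Persian recording
```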
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1999 | 0.43 | 1000 | 0.3631 | 55.5243 |
| 0.1391 | 0.86 | 2000 | 0.2965 | 47.4574 |
| 0.0719 | 1.29 | 3000 | 0.2725 | 54.5863 |
| 0.0611 | 1.72 | 4000 | 0.2511 | 52.3497 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
Techbro/Rahina
|
Techbro
| 2024-01-24T17:59:33Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2024-01-24T17:59:30Z |
---
license: bigscience-openrail-m
---
|
stablediffusionapi/reallife_v60
|
stablediffusionapi
| 2024-01-24T17:47:37Z | 28 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-24T17:46:07Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference
## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "reallife_v60".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/reallife_v60)
Model link: [View model](https://modelslab.com/models/reallife_v60)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "reallife_v60",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
TeeZee/Kyllene-57B-v1.0-bpw3.0-h6-exl2
|
TeeZee
| 2024-01-24T17:42:38Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T18:53:48Z |
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
tags:
- merge
---
## **Kyllene 57B v1.0**
[exllamav2](https://github.com/turboderp/exllamav2) quant for [TeeZee/Kyllene-57B-v1.0](https://huggingface.co/TeeZee/Kyllene-57B-v1.0)
Runs smoothly on a single 3090 in webui with the context length set to 4096, the ExLlamav2_HF loader,
and cache_8bit=True.
All comments are greatly appreciated; download, test, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
|
hojzas/setfit-proj8
|
hojzas
| 2024-01-24T17:40:28Z | 47 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:hojzas/proj8-label2",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"co2_eq_emissions",
"region:us"
] |
text-classification
| 2024-01-24T17:40:07Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- hojzas/proj8-label2
metrics:
- accuracy
widget:
- text: 'def first_with_given_key(iterable, key=lambda x: x):\n keys_used = {}\n for
item in iterable:\n rp = repr(key(item))\n if rp not in keys_used.keys():\n keys_used[rp]
= repr(item)\n yield item'
- text: 'def first_with_given_key(iterable, key=lambda x: x):\n keys=[]\n for
i in iterable:\n if key(i) not in keys:\n yield i\n keys.append(key(i))'
- text: 'def first_with_given_key(iterable, key=repr):\n set_of_keys = set()\n lambda_key
= (lambda x: key(x))\n for item in iterable:\n key = lambda_key(item)\n try:\n key_for_set
= hash(key)\n except TypeError:\n key_for_set = repr(key)\n if
key_for_set in set_of_keys:\n continue\n set_of_keys.add(key_for_set)\n yield
item'
- text: 'def first_with_given_key(iterable, key = lambda x: x):\n found_keys={}\n for
i in iterable:\n if key(i) not in found_keys.keys():\n found_keys[key(i)]=i\n yield
i'
- text: 'def first_with_given_key(the_iterable, key=lambda x: x):\n temp_keys=[]\n for
i in range(len(the_iterable)):\n if (key(the_iterable[i]) not in temp_keys):\n temp_keys.append(key(the_iterable[i]))\n yield
the_iterable[i]\n del temp_keys'
pipeline_tag: text-classification
inference: true
co2_eq_emissions:
emissions: 0.2520929621561019
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
ram_total_size: 251.49160385131836
hours_used: 0.005
base_model: sentence-transformers/paraphrase-mpnet-base-v2
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [hojzas/proj8-label2](https://huggingface.co/datasets/hojzas/proj8-label2) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
- **Training Dataset:** [hojzas/proj8-label2](https://huggingface.co/datasets/hojzas/proj8-label2)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'def first_with_given_key(iterable, key=lambda x: x):\\n keys_in_list = []\\n for it in iterable:\\n if key(it) not in keys_in_list:\\n keys_in_list.append(key(it))\\n yield it'</li><li>'def first_with_given_key(iterable, key=lambda value: value):\\n it = iter(iterable)\\n saved_keys = []\\n while True:\\n try:\\n value = next(it)\\n if key(value) not in saved_keys:\\n saved_keys.append(key(value))\\n yield value\\n except StopIteration:\\n break'</li><li>'def first_with_given_key(iterable, key=None):\\n if key is None:\\n key = lambda x: x\\n item_list = []\\n key_set = set()\\n for item in iterable:\\n generated_item = key(item)\\n if generated_item not in item_list:\\n item_list.append(generated_item)\\n yield item'</li></ul> |
| 1 | <ul><li>'def first_with_given_key(lst, key = lambda x: x):\\n res = set()\\n for i in lst:\\n if repr(key(i)) not in res:\\n res.add(repr(key(i)))\\n yield i'</li><li>'def first_with_given_key(iterable, key=repr):\\n set_of_keys = set()\\n lambda_key = (lambda x: key(x))\\n for item in iterable:\\n key = lambda_key(item)\\n try:\\n key_for_set = hash(key)\\n except TypeError:\\n key_for_set = repr(key)\\n if key_for_set in set_of_keys:\\n continue\\n set_of_keys.add(key_for_set)\\n yield item'</li><li>'def first_with_given_key(iterable, key=None):\\n if key is None:\\n key = identity\\n appeared_keys = set()\\n for item in iterable:\\n generated_key = key(item)\\n if not generated_key.__hash__:\\n generated_key = repr(generated_key)\\n if generated_key not in appeared_keys:\\n appeared_keys.add(generated_key)\\n yield item'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("hojzas/setfit-proj8")
# Run inference
preds = model("def first_with_given_key(iterable, key=lambda x: x):\n keys=[]\n for i in iterable:\n if key(i) not in keys:\n yield i\n keys.append(key(i))")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 43 | 90.28 | 119 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 20 |
| 1 | 5 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0159 | 1 | 0.3158 | - |
| 0.7937 | 50 | 0.0022 | - |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.000 kg of CO2
- **Hours Used**: 0.005 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: No GPU used
- **CPU Model**: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
- **RAM Size**: 251.49 GB
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.36.1
- PyTorch: 2.1.2+cu121
- Datasets: 2.14.7
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Asahina2K/myKlaudiaXL
|
Asahina2K
| 2024-01-24T17:38:56Z | 2 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:Linaqruf/animagine-xl-2.0",
"base_model:adapter:Linaqruf/animagine-xl-2.0",
"license:other",
"region:us"
] |
text-to-image
| 2024-01-21T11:51:46Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
1girl, klaudia, green eyes, long hair, blonde hair,braid, solo,green eyes,
looking at viewer, breasts, sweat, shirt, white scrunchie, scrunchie,
sitting, white shirt, sleeveless shirt, white skirt, medium breasts, closed
mouth, bangs, bare shoulders, indoors, window, dutch angle, pleated
skirt,(masterpiece), (best quality), (ultra-detailed), illustration, perfect
composition, intricate details, moist skin, intricate details, HDR,
bokeh,<lora:my_klaudiaXL:0.8>
parameters:
negative_prompt: >-
nsfw, longbody, lowres, bad anatomy, bad hands, missing fingers, pubic
hair, extra digit, fewer digits, cropped, worst quality, low quality,
output:
url: images/00108-animagine-xl-3.0_4094758187.png
- text: >-
1girl, klaudia, green eyes, long hair, blonde hair,braid, solo,green eyes,
cowboy shot, looking at viewer, beach, playing instrument,
flute,(masterpiece), (best quality), (ultra-detailed), illustration, perfect
composition, intricate details, moist skin, intricate details, HDR,
bokeh,<lora:my_klaudiaXL:0.8>
parameters:
negative_prompt: >-
nsfw, longbody, lowres, bad anatomy, bad hands, missing fingers, pubic
hair, extra digit, fewer digits, cropped, worst quality, low quality,
output:
url: images/00063-animagine-xl-3.0_4009275284.png
- text: >-
1girl, klaudia, green eyes, long hair, blonde hair,braid, solo, green eyes,
looking at viewer, outstretched arms, outdoors, dress,blush, sleeveless,
sunset, armpits, open mouth, cleavage, smile, medium breasts, black dress,
building, bare shoulders, pink flower, hair intakes, bench, sleeveless
dress, bare arms, water,(masterpiece), (best quality), (ultra-detailed),
illustration, perfect composition, intricate details, moist skin, intricate
details, HDR, bokeh,<lora:my_klaudiaXL:0.8>
parameters:
negative_prompt: >-
nsfw, longbody, lowres, bad anatomy, bad hands, missing fingers, pubic
hair, extra digit, fewer digits, cropped, worst quality, low quality,
output:
url: images/00132-animagine-xl-3.0_2820064731.png
base_model: Linaqruf/animagine-xl-2.0
instance_prompt: klaudia
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
---
# Klaudia Valentz [Klaudia] LoRAXL
<Gallery />
## Model description
From Atelier Ryza, Klaudia is now available as an SDXL LoRA.
Trained using the [AnimagineXL 3.0](https://huggingface.co/cagliostrolab/animagine-xl-3.0) model.
Licensed under the [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/), with an added limitation for this model: [myKlaudiaXL](https://huggingface.co/Asahina2K/myKlaudiaXL) shall not be used for any kind of commercial use, including but not limited to selling this model or merges of this model, selling images generated by this model or merges of this model, or offering it as a 'paid member exclusive' on monetization platforms like Patreon/Fanbox/etc.
This LoRA is available at:
[Civitai](https://civitai.com/models/272492/atelier-ryza-or-klaudia-valentz-klaudia-sdxl-lora) & [Tensor.art](https://tensor.art/models/685339715421010336)
## Trigger words
You should use `klaudia` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Asahina2K/myKlaudiaXL/tree/main) them in the Files & versions tab.
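As a quick usage sketch (not part of the original card), the LoRA can also be loaded on top of an SDXL checkpoint with 🤗 Diffusers along these lines; the base checkpoint, the way the weight file is resolved, and the generation settings below are assumptions:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base checkpoint is an assumption based on this card (AnimagineXL 3.0).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights from this repository.
pipe.load_lora_weights("Asahina2K/myKlaudiaXL")

# Use the trigger word `klaudia` in the prompt.
prompt = "1girl, klaudia, green eyes, long hair, blonde hair, braid, looking at viewer, masterpiece, best quality"
negative = "lowres, bad anatomy, bad hands, worst quality, low quality"
image = pipe(prompt, negative_prompt=negative, num_inference_steps=28).images[0]
image.save("klaudia.png")
```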
|
SeanJIE250/llama2_chatbot_law
|
SeanJIE250
| 2024-01-24T17:37:06Z | 13 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"en",
"zh",
"dataset:SeanJIE250/llama2_law",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T19:25:45Z |
---
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
license: other
datasets:
- SeanJIE250/llama2_law
language:
- en
- zh
---
# Contact Information
Email:[email protected]
# English INTRO
Give me some red stars ♥️ if you like this model! It is focused on the legal domain; honestly, it performs poorly as a general daily chatbot, but it is starting to understand Mandarin and can handle detailed case analysis.
# Mandarin INTRO
Long-time users, please leave a red star ♥️! A Chinese-language legal chatbot that handles the analysis of specific cases fairly well.
# Usage
First of all, running this code requires the transformers library, which you can install directly. Hope it serves you well!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "SeanJIE250/chatbot_LAW"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Example prompt in Chinese: "How many years is the sentence for murder in China?"
messages = [
    {"role": "user", "content": "杀了人在中国判多少年?"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
outputs = model.generate(input_ids.to('cuda'), max_new_tokens=200)  # you can adjust max_new_tokens as you want
response = tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=False)
print(response)

# Example prompt in English
messages = [
    {"role": "user", "content": "How to split the property if I divorced my husband?"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
outputs = model.generate(input_ids.to('cuda'), max_new_tokens=200)  # you can adjust max_new_tokens as you want
response = tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=False)
print(response)
```
|
dlibf/zephyr-7b-sft-full_epoch3
|
dlibf
| 2024-01-24T17:35:59Z | 12 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/ultrachat_200k",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T07:22:49Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
datasets:
- HuggingFaceH4/ultrachat_200k
model-index:
- name: zephyr-7b-sft-full_epoch3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-full_epoch3
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/ultrachat_200k dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0449
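The autogenerated card does not include a usage snippet; as a rough sketch (not from the original card), the checkpoint can be loaded for plain text generation with 🤗 Transformers as below. The prompt and sampling settings are illustrative assumptions.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dlibf/zephyr-7b-sft-full_epoch3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain what supervised fine-tuning (SFT) does to a base language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```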
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9689 | 1.0 | 1089 | 3.9833 |
| 3.1562 | 2.0 | 2179 | 3.1740 |
| 2.99 | 3.0 | 3267 | 3.0449 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
|
poGlingus/Mistral-7B-Instruct-v0.1
|
poGlingus
| 2024-01-24T17:35:51Z | 1 | 1 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-24T17:29:05Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
Made with a custom dataset from https://hyperspace.computer/varun/ethereum, generated via GPT-4-powered automation.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Nicolas W Schlaepfer
- **Funded by [optional]:** Hyperspace AI
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
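Pending the authors' own snippet, here is a minimal sketch assuming the standard PEFT adapter workflow on top of the base model named in this card; the example prompt and generation settings are illustrative:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "mistralai/Mistral-7B-v0.1"        # base model listed in this card
adapter_id = "poGlingus/Mistral-7B-Instruct-v0.1"  # this repository (PEFT adapter)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the PEFT adapter weights from this repo to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("What is Ethereum?", return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```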
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
hojzas/setfit-tutorial
|
hojzas
| 2024-01-24T17:33:24Z | 47 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"co2_eq_emissions",
"region:us"
] |
text-classification
| 2024-01-24T17:33:02Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: 'has the chops of a smart-aleck film school brat and the imagination of a
big kid ... '
- text: 'that ''s truly deserving of its oscar nomination '
- text: 'instead gets ( sci-fi ) rehash '
- text: 'career-defining revelation '
- text: 'is ultimately about as inspiring as a hallmark card . '
pipeline_tag: text-classification
inference: true
co2_eq_emissions:
emissions: 0.05983916547782622
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
ram_total_size: 251.49160385131836
hours_used: 0.001
base_model: sentence-transformers/paraphrase-mpnet-base-v2
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'stale and uninspired . '</li><li>"the film 's considered approach to its subject matter is too calm and thoughtful for agitprop , and the thinness of its characterizations makes it a failure as straight drama . ' "</li><li>"that their charm does n't do a load of good "</li></ul> |
| 1 | <ul><li>"broomfield is energized by volletta wallace 's maternal fury , her fearlessness "</li><li>'flawless '</li><li>'insightfully written , delicately performed '</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("hojzas/setfit-tutorial")
# Run inference
preds = model("career-defining revelation ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 11.4375 | 33 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
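The hyperparameters above correspond to a SetFit training run along the lines of the sketch below. This is not the script used for this model: the dataset is a placeholder and only a subset of the listed arguments is shown.
```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder data: the card does not name its training set, so this tiny
# SST-2 slice is purely illustrative.
train_dataset = load_dataset("sst2", split="train[:16]")

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

args = TrainingArguments(
    batch_size=16,            # matches batch_size (16, 16) above
    num_epochs=1,             # matches num_epochs (1, 1) above
    body_learning_rate=2e-5,
    head_learning_rate=2e-5,
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()
```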
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.2176 | - |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.000 kg of CO2
- **Hours Used**: 0.001 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: No GPU used
- **CPU Model**: Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
- **RAM Size**: 251.49 GB
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.36.1
- PyTorch: 2.1.2+cu121
- Datasets: 2.14.7
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
LongSafari/hyenadna-tiny-1k-seqlen-hf
|
LongSafari
| 2024-01-24T17:23:02Z | 20,383 | 1 |
transformers
|
[
"transformers",
"safetensors",
"hyenadna",
"text-generation",
"dna",
"biology",
"genomics",
"hyena",
"custom_code",
"arxiv:2306.15794",
"arxiv:2302.10866",
"license:bsd-3-clause",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-11-03T17:30:24Z |
---
license: bsd-3-clause
tags:
- dna
- biology
- genomics
- hyena
---
# HyenaDNA
Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**.
See below for an [overview](#model) of the model and training. Better yet, check out these resources.
**Resources:**
- [arxiv](https://arxiv.org/abs/2306.15794)
- [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna)
- [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
- [github](https://github.com/HazyResearch/hyena-dna)
**Links to all HuggingFace models:**
We've uploaded a [collection](https://huggingface.co/collections/LongSafari/hyenadna-models-654d0cbbe113b04ba5a0f638) of all the pretrained HyenaDNA checkpoints.
You'll see models of different sizes and sequence lengths. There are also original weights-only versions of each model in the [LongSafari organization](https://huggingface.co/LongSafari), which are designed to be loaded with the original [github](https://github.com/HazyResearch/hyena-dna) repo. These models have identical outputs to the models in the collection above, just different interfaces.
See [GPU requirements](#hardware) for each model.
### Using HyenaDNA
In this brief code sample we demonstrate fine-tuning HyenaDNA on a sequence classification task. This sample uses the `medium` checkpoint, with a maximum sequence length of 160k nucleotides. Note that training will fail if you use a sequence length longer than the maximum supported length for your chosen checkpoint.
In testing, we have been able to train at a sequence length up to about 250k nucleotides on a Colab T4 GPU (16GB VRAM). For longer sequence lengths, more memory will be required.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers import TrainingArguments, Trainer, logging
from datasets import Dataset
import torch
# instantiate pretrained model
checkpoint = 'LongSafari/hyenadna-medium-160k-seqlen-hf'
max_length = 160_000
# bfloat16 for better speed and reduced memory usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
# Generate some random sequence and labels
# If you're copying this code, replace the sequences and labels
# here with your own data!
sequence = 'ACTG' * int(max_length/4)
sequence = [sequence] * 8 # Create 8 identical samples
tokenized = tokenizer(sequence)["input_ids"]
labels = [0, 1] * 4
# Create a dataset for training
ds = Dataset.from_dict({"input_ids": tokenized, "labels": labels})
ds.set_format("pt")
# Initialize Trainer
# Note that we're using extremely small batch sizes to maximize
# our ability to fit long sequences in memory!
args = {
"output_dir": "tmp",
"num_train_epochs": 1,
"per_device_train_batch_size": 1,
"gradient_accumulation_steps": 4,
"gradient_checkpointing": True,
"learning_rate": 2e-5,
}
training_args = TrainingArguments(**args)
trainer = Trainer(model=model, args=training_args, train_dataset=ds)
result = trainer.train()
print(result)
# Now we can save_pretrained() or push_to_hub() to share the trained model!
```
You may also find these [notebooks](https://huggingface.co/docs/transformers/notebooks) useful. Although they're not specific to HyenaDNA, they contain additional examples of training DNA and sequence classification models.
- [How to fine-tune a Nucleotide Transformer model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb)
- [How to fine-tune a model on text classification](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)
### GPU requirements (suggested)
<a name="hardware"></a>
Here are suggestions on the hardware (preferred minimum) we think you can use for each model.
GPU during: Pretrain, fine-tune, inference
- [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4)
- [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40GB, T4, T4)
- [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40GB, T4, T4)
- [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40GB, A100-40GB, T4)
- [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80GB, A100-80GB, A100-40GB)
## Model & Training Overview
<a name="model"></a>
HyenaDNA uses a simple stack of [Hyena](https://arxiv.org/abs/2302.10866) operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator is able to match quality in language modeling by using modified input projections, implicit convolutions and gating, all subquadratic operations.
This enables HyenaDNA to reach context lengths of up to 500x longer than previous genomic Transformer models using dense attention, and train 160x faster at sequence length 1M (compared to Flash Attention).
We use a single character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a **global receptive field** at each layer.
We pretrain using next token (nucleotide) prediction on the human reference genome (HG38).
HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning.
Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) for more details on HyenaDNA!
### Authors
Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re.
**Contact**
Eric Nguyen, [email protected]
Michael Poli, [email protected]
Marjan Faizi, [email protected]
## Citation
Feel free to cite us :)
```
@article{nguyen2023hyenadna,
title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution},
author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré},
year={2023},
eprint={2306.15794},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
LongSafari/hyenadna-tiny-1k-seqlen-d256-hf
|
LongSafari
| 2024-01-24T17:22:45Z | 176 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hyenadna",
"text-generation",
"dna",
"biology",
"genomics",
"hyena",
"custom_code",
"arxiv:2306.15794",
"arxiv:2302.10866",
"license:bsd-3-clause",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-11-03T14:11:43Z |
---
license: bsd-3-clause
tags:
- dna
- biology
- genomics
- hyena
---
# HyenaDNA
Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**.
See below for an [overview](#model) of the model and training. Better yet, check out these resources.
**Resources:**
- [arxiv](https://arxiv.org/abs/2306.15794)
- [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna)
- [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
- [github](https://github.com/HazyResearch/hyena-dna)
**Links to all HuggingFace models:**
We've uploaded a [collection](https://huggingface.co/collections/LongSafari/hyenadna-models-654d0cbbe113b04ba5a0f638) of all the pretrained HyenaDNA checkpoints.
You'll see models of different sizes and sequence lengths. There are also original weights-only versions of each model in the [LongSafari organization](https://huggingface.co/LongSafari), which are designed to be loaded with the original [github](https://github.com/HazyResearch/hyena-dna) repo. These models have identical outputs to the models in the collection above, just different interfaces.
See [GPU requirements](#hardware) for each model.
### Using HyenaDNA
In this brief code sample we demonstrate fine-tuning HyenaDNA on a sequence classification task. This sample uses the `medium` checkpoint, with a maximum sequence length of 160k nucleotides. Note that training will fail if you use a sequence length longer than the maximum supported length for your chosen checkpoint.
In testing, we have been able to train at a sequence length up to about 250k nucleotides on a Colab T4 GPU (16GB VRAM). For longer sequence lengths, more memory will be required.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers import TrainingArguments, Trainer, logging
from datasets import Dataset
import torch
# instantiate pretrained model
checkpoint = 'LongSafari/hyenadna-medium-160k-seqlen-hf'
max_length = 160_000
# bfloat16 for better speed and reduced memory usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
# Generate some random sequence and labels
# If you're copying this code, replace the sequences and labels
# here with your own data!
sequence = 'ACTG' * int(max_length/4)
sequence = [sequence] * 8 # Create 8 identical samples
tokenized = tokenizer(sequence)["input_ids"]
labels = [0, 1] * 4
# Create a dataset for training
ds = Dataset.from_dict({"input_ids": tokenized, "labels": labels})
ds.set_format("pt")
# Initialize Trainer
# Note that we're using extremely small batch sizes to maximize
# our ability to fit long sequences in memory!
args = {
"output_dir": "tmp",
"num_train_epochs": 1,
"per_device_train_batch_size": 1,
"gradient_accumulation_steps": 4,
"gradient_checkpointing": True,
"learning_rate": 2e-5,
}
training_args = TrainingArguments(**args)
trainer = Trainer(model=model, args=training_args, train_dataset=ds)
result = trainer.train()
print(result)
# Now we can save_pretrained() or push_to_hub() to share the trained model!
```
You may also find these [notebooks](https://huggingface.co/docs/transformers/notebooks) useful. Although they're not specific to HyenaDNA, they contain additional examples of training DNA and sequence classification models.
- [How to fine-tune a Nucleotide Transformer model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb)
- [How to fine-tune a model on text classification](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)
### GPU requirements (suggested)
<a name="hardware"></a>
Here are suggestions on the hardware (preferred minimum) we think you can use for each model.
GPU during: Pretrain, fine-tune, inference
- [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4)
- [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40GB, T4, T4)
- [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40GB, T4, T4)
- [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40GB, A100-40GB, T4)
- [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80GB, A100-80GB, A100-40GB)
## Model & Training Overview
<a name="model"></a>
HyenaDNA uses a simple stack of [Hyena](https://arxiv.org/abs/2302.10866) operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator is able to match quality in language modeling by using modified input projections, implicit convolutions and gating, all subquadratic operations.
This enables HyenaDNA to reach context lengths of up to 500x longer than previous genomic Transformer models using dense attention, and train 160x faster at sequence length 1M (compared to Flash Attention).
We use a single character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a **global receptive field** at each layer.
We pretrain using next token (nucleotide) prediction on the human reference genome (HG38).
HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning.
Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) for more details on HyenaDNA!
### Authors
Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re.
**Contact**
Eric Nguyen, [email protected]
Michael Poli, [email protected]
Marjan Faizi, [email protected]
## Citation
Feel free to cite us :)
```
@article{nguyen2023hyenadna,
title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution},
author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré},
year={2023},
eprint={2306.15794},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CyberPeace-Institute/Cybersecurity-Knowledge-Graph
|
CyberPeace-Institute
| 2024-01-24T17:22:03Z | 294 | 19 |
transformers
|
[
"transformers",
"pytorch",
"token-classification",
"legal",
"custom_code",
"en",
"arxiv:2204.02685",
"license:mit",
"autotrain_compatible",
"region:us"
] |
token-classification
| 2023-09-19T13:22:24Z |
---
language:
- en
pipeline_tag: token-classification
tags:
- legal
license: mit
---
# Knowledge Graph Extraction for Cyber incidents
This model has been finetuned with SecureBERT (https://arxiv.org/abs/2204.02685)
on the CASIE dataset (https://ieeexplore.ieee.org/document/9776031). We have implemented the approach described in the CASIE paper.
# Model Description
The following description is taken from the CASIE paper:
- An **event nugget** is a word or phrase that most clearly expresses the event occurrence. These differ from event triggers in that they can be multi-word phrases.
- An **event argument** is an event participant or property value. They can be taggable entities involved in the event, such as person or organization, or attributes that specify important information, such as time or amount.
- A **role** is a semantic relation between an event nugget and an argument. Each event type specifies the roles it can have and constraints on the arguments that can fill them.
- A **realis** value specifies whether or not an event occurred and can be one of the three values: Actual (event actually happened), Other (failed event, future event), or Generic (an undetermined/non-specific event, such as referring to the concept of phishing attacks).
# Example

# Usage
```python
from transformers import AutoModelForTokenClassification
model = AutoModelForTokenClassification.from_pretrained("CyberPeace-Institute/Cybersecurity-Knowledge-Graph", trust_remote_code=True)
input_text = "This is a Cybersecurity-related text."
output = model(input_text)
```
IMPORTANT: To get the Argument-to-Role coreferences, use the dedicated **space**! You can download those models under "arg_role_models/".
|
LongSafari/hyenadna-medium-160k-seqlen-hf
|
LongSafari
| 2024-01-24T17:20:02Z | 2,055 | 2 |
transformers
|
[
"transformers",
"safetensors",
"hyenadna",
"text-generation",
"dna",
"biology",
"genomics",
"hyena",
"custom_code",
"arxiv:2306.15794",
"arxiv:2302.10866",
"license:bsd-3-clause",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-11-03T14:07:05Z |
---
license: bsd-3-clause
tags:
- dna
- biology
- genomics
- hyena
---
# HyenaDNA
Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**.
See below for an [overview](#model) of the model and training. Better yet, check out these resources.
**Resources:**
- [arxiv](https://arxiv.org/abs/2306.15794)
- [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna)
- [colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
- [github](https://github.com/HazyResearch/hyena-dna)
**Links to all HuggingFace models:**
We've uploaded a [collection](https://huggingface.co/collections/LongSafari/hyenadna-models-654d0cbbe113b04ba5a0f638) of all the pretrained HyenaDNA checkpoints.
You'll see models of different sizes and sequence lengths. There are also original weights-only versions of each model in the [LongSafari organization](https://huggingface.co/LongSafari), which are designed to be loaded with the original [github](https://github.com/HazyResearch/hyena-dna) repo. These models have identical outputs to the models in the collection above, just different interfaces.
See [GPU requirements](#hardware) for each model.
### Using HyenaDNA
In this brief code sample we demonstrate fine-tuning HyenaDNA on a sequence classification task. This sample uses the `medium` checkpoint, with a maximum sequence length of 160k nucleotides. Note that training will fail if you use a sequence length longer than the maximum supported length for your chosen checkpoint.
In testing, we have been able to train at a sequence length up to about 250k nucleotides on a Colab T4 GPU (16GB VRAM). For longer sequence lengths, more memory will be required.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers import TrainingArguments, Trainer, logging
from datasets import Dataset
import torch
# instantiate pretrained model
checkpoint = 'LongSafari/hyenadna-medium-160k-seqlen-hf'
max_length = 160_000
# bfloat16 for better speed and reduced memory usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)
# Generate some random sequence and labels
# If you're copying this code, replace the sequences and labels
# here with your own data!
sequence = 'ACTG' * int(max_length/4)
sequence = [sequence] * 8 # Create 8 identical samples
tokenized = tokenizer(sequence)["input_ids"]
labels = [0, 1] * 4
# Create a dataset for training
ds = Dataset.from_dict({"input_ids": tokenized, "labels": labels})
ds.set_format("pt")
# Initialize Trainer
# Note that we're using extremely small batch sizes to maximize
# our ability to fit long sequences in memory!
args = {
"output_dir": "tmp",
"num_train_epochs": 1,
"per_device_train_batch_size": 1,
"gradient_accumulation_steps": 4,
"gradient_checkpointing": True,
"learning_rate": 2e-5,
}
training_args = TrainingArguments(**args)
trainer = Trainer(model=model, args=training_args, train_dataset=ds)
result = trainer.train()
print(result)
# Now we can save_pretrained() or push_to_hub() to share the trained model!
```
You may also find these [notebooks](https://huggingface.co/docs/transformers/notebooks) useful. Although they're not specific to HyenaDNA, they contain additional examples of training DNA and sequence classification models.
- [How to fine-tune a Nucleotide Transformer model](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/nucleotide_transformer_dna_sequence_modelling.ipynb)
- [How to fine-tune a model on text classification](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb)
### GPU requirements (suggested)
<a name="hardware"></a>
Here are suggestions on the hardware (preferred minimum) we think you can use for each model.
GPU during: Pretrain, fine-tune, inference
- [tiny-1k](https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen/tree/main): (T4, T4, T4)
- [small-32k](https://huggingface.co/LongSafari/hyenadna-small-32k-seqlen/tree/main): (A100-40GB, T4, T4)
- [medium-160k](https://huggingface.co/LongSafari/hyenadna-medium-160k-seqlen/tree/main): (A100-40GB, T4, T4)
- [medium-450k](https://huggingface.co/LongSafari/hyenadna-medium-450k-seqlen/tree/main): (A100-40GB, A100-40GB, T4)
- [large-1m](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen/tree/main): (A100-80GB, A100-80GB, A100-40GB)
## Model & Training Overview
<a name="model"></a>
HyenaDNA uses a simple stack of [Hyena](https://arxiv.org/abs/2302.10866) operators, which are a subquadratic drop-in replacement for attention in Transformers. The Hyena operator is able to match quality in language modeling by using modified input projections, implicit convolutions and gating, all subquadratic operations.
This enables HyenaDNA to reach context lengths of up to 500x longer than previous genomic Transformer models using dense attention, and train 160x faster at sequence length 1M (compared to Flash Attention).
We use a single character tokenizer with a primary vocab of 4 nucleotides (plus special tokens), enabling the single nucleotide resolution, a first in genomic foundation models. In addition, the implicit long convolution enables a **global receptive field** at each layer.
We pretrain using next token (nucleotide) prediction on the human reference genome (HG38).
HyenaDNA sets new SotA on 23 downstream tasks including predicting regulatory elements, chromatin profiles, and species classification. We also explore what new capabilities open up with long context in genomics, including the first use of in-context learning with soft prompt tuneable tokens and instruction fine-tuning.
Check out our [blog](https://hazyresearch.stanford.edu/blog/2023-06-29-hyena-dna) for more details on HyenaDNA!
### Authors
Eric Nguyen*, Michael Poli*, Marjan Faizi*, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, Stefano Ermon, Stephen Baccus, Chris Re.
**Contact**
Eric Nguyen, [email protected]
Michael Poli, [email protected]
Marjan Faizi, [email protected]
## Citation
Feel free to cite us :)
```
@article{nguyen2023hyenadna,
title={HyenaDNA: Long-Range Genomic Sequence Modeling at Single Nucleotide Resolution},
author={Eric Nguyen and Michael Poli and Marjan Faizi and Armin Thomas and Callum Birch-Sykes and Michael Wornow and Aman Patel and Clayton Rabideau and Stefano Massaroli and Yoshua Bengio and Stefano Ermon and Stephen A. Baccus and Chris Ré},
year={2023},
eprint={2306.15794},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
hojzas/my-awesome-setfit-model
|
hojzas
| 2024-01-24T17:12:30Z | 48 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-01-24T17:12:14Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: 'a literate presentation that wonderfully weaves a murderous event in 1873
with murderous rage in 2002 . '
- text: 'an entertaining , colorful , action-filled crime story with an intimate heart
. '
- text: 'drops you into a dizzying , volatile , pressure-cooker of a situation that
quickly snowballs out of control , while focusing on the what much more than the
why . '
- text: 'the most compelling wiseman epic of recent years . '
- text: 'in the end , the movie collapses on its shaky foundation despite the best
efforts of director joe carnahan . '
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8612385321100917
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'stale and uninspired . '</li><li>"the film 's considered approach to its subject matter is too calm and thoughtful for agitprop , and the thinness of its characterizations makes it a failure as straight drama . ' "</li><li>"that their charm does n't do a load of good "</li></ul> |
| 1 | <ul><li>"broomfield is energized by volletta wallace 's maternal fury , her fearlessness "</li><li>'flawless '</li><li>'insightfully written , delicately performed '</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8612 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("hojzas/my-awesome-setfit-model")
# Run inference
preds = model("the most compelling wiseman epic of recent years . ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 11.4375 | 33 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.004 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Grekkla/MedChmtsStyleLORA
|
Grekkla
| 2024-01-24T17:09:07Z | 21 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:unknown",
"region:us"
] |
text-to-image
| 2024-01-24T16:43:11Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
character concept of a medieval soldier, he is wearing a platemail armor,
shoulderguards, pauldrons, shoulder armor, and a brown a leather kilt, in
the style of medchmts, white background <lora:medchmtsStyleSDXL-000003:1>
parameters:
negative_prompt: ' unaestheticXL_hk1'
output:
url: images/00000-2574209897.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: medchmts style
license: unknown
---
# MedchmtsStyle
<Gallery />
## Trigger words
You should use `medchmts style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Grekkla/MedChmtsStyleLORA/tree/main) them in the Files & versions tab.
|
PhilBinder/ft-llama-2-13b-imp-sub-ps
|
PhilBinder
| 2024-01-24T17:03:33Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T17:03:29Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
Kooten/Buttercup-4x7B-4bpw-exl2
|
Kooten
| 2024-01-24T16:58:38Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T12:06:28Z |
---
license: apache-2.0
language:
- en
tags:
- moe
- merge
---
# Buttercup-4x7B 4bpw
## Description
Exllama quant of [Kquant03/Buttercup-4x7B-bf16](https://huggingface.co/Kquant03/Buttercup-4x7B-bf16)
## Other quants:
EXL2: [6bpw](https://huggingface.co/Kooten/Buttercup-4x7B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/Buttercup-4x7B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/Buttercup-4x7B-4bpw-exl2)
## Prompt format:
Unclear: the source model card does not specify one, and at least one of the merged models mentions multiple prompt formats.
## Contact
Kooten on discord
|
Kooten/Buttercup-4x7B-5bpw-exl2
|
Kooten
| 2024-01-24T16:57:38Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T12:06:41Z |
---
license: apache-2.0
language:
- en
tags:
- moe
- merge
---
# Buttercup-4x7B 5bpw
## Description
Exllama quant of [Kquant03/Buttercup-4x7B-bf16](https://huggingface.co/Kquant03/Buttercup-4x7B-bf16)
## Other quants:
EXL2: [6bpw](https://huggingface.co/Kooten/Buttercup-4x7B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/Buttercup-4x7B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/Buttercup-4x7B-4bpw-exl2)
## Prompt format:
Unclear: the source model card does not specify one, and at least one of the merged models mentions multiple prompt formats.
## Contact
Kooten on discord
|
rheubanks/mistral7b_instruct_generation
|
rheubanks
| 2024-01-24T16:49:48Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T16:49:44Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral7b_instruct_generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral7b_instruct_generation
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 0.03
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7815 | 0.0 | 20 | 1.8296 |
| 1.8199 | 0.01 | 40 | 1.7992 |
| 1.7606 | 0.01 | 60 | 1.7950 |
| 1.8347 | 0.01 | 80 | 1.7923 |
| 1.7433 | 0.01 | 100 | 1.7958 |
| 1.8829 | 0.02 | 120 | 1.7928 |
| 1.769 | 0.02 | 140 | 1.7893 |
| 1.7817 | 0.02 | 160 | 1.7849 |
| 1.7975 | 0.03 | 180 | 1.7881 |
| 2.008 | 0.03 | 200 | 1.7882 |
| 1.827 | 0.03 | 220 | 1.7993 |
| 1.8336 | 0.03 | 240 | 1.7953 |
| 1.8757 | 0.04 | 260 | 1.7916 |
| 1.9317 | 0.04 | 280 | 1.7900 |
| 1.8708 | 0.04 | 300 | 1.7867 |
| 1.8851 | 0.04 | 320 | 1.7928 |
| 1.94 | 0.05 | 340 | 1.7880 |
| 1.7749 | 0.05 | 360 | 1.8033 |
| 1.8647 | 0.05 | 380 | 1.7870 |
| 1.8468 | 0.06 | 400 | 1.7871 |
| 1.8341 | 0.06 | 420 | 1.7890 |
| 1.9152 | 0.06 | 440 | 1.7892 |
| 1.7979 | 0.06 | 460 | 1.8051 |
| 1.9065 | 0.07 | 480 | 1.7986 |
| 1.8011 | 0.07 | 500 | 1.7939 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ozymandia5/lionshehe
|
ozymandia5
| 2024-01-24T16:48:37Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-24T16:36:35Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### lionshehe Dreambooth model trained by ozymandia5 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 27
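As a rough usage sketch (not included in the original card), the Dreambooth weights can be loaded with the 🤗 Diffusers `StableDiffusionPipeline`; the prompt is only an illustration of the learned concept token:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ozymandia5/lionshehe", torch_dtype=torch.float16
).to("cuda")

# Illustrative prompt using the concept name; adjust to taste.
image = pipe("a photo of lionshehe", num_inference_steps=30).images[0]
image.save("lionshehe_sample.png")
```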
Sample pictures of this concept:

|
fabozzi/Llama-2-7b-UpDown
|
fabozzi
| 2024-01-24T16:35:52Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-24T16:35:37Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (see the sketch after this list for an equivalent `BitsAndBytesConfig`):
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
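The values above map onto a `BitsAndBytesConfig` roughly like the sketch below. This is not from the original card: the base checkpoint name is an assumption inferred from the adapter's name, and only the 8-bit options are set because `load_in_4bit` was `False`.
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 8-bit settings mirroring the list above; 4-bit fields are omitted since
# load_in_4bit was False.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

# Base checkpoint is an assumption inferred from the adapter's name.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "fabozzi/Llama-2-7b-UpDown")
```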
### Framework versions
- PEFT 0.5.0
|
mobydoby/ppo-Huggy
|
mobydoby
| 2024-01-24T16:34:27Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-24T16:34:21Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: mobydoby/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LoneStriker/openbuddy-deepseek-10b-v17.1-4k-GGUF
|
LoneStriker
| 2024-01-24T16:24:02Z | 277 | 2 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"fi",
"license:other",
"region:us"
] |
text-generation
| 2024-01-24T04:08:34Z |
---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
- fi
pipeline_tag: text-generation
inference: false
library_name: transformers
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/deepseek-ai/deepseek-llm-7b-base
License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL)
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
|
iknoor/UTDRM-RoBERTa
|
iknoor
| 2024-01-24T16:19:00Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-01-24T16:18:43Z |
---
license: mit
---
# UTDRM-RoBERTa
This is the UTDRM-RoBERTa model from the paper UTDRM: Unsupervised Method for Training Debunked-narrative Retrieval Models.
Please consider citing the following paper if you use this model.
```
@article{singh2023utdrm,
title={UTDRM: unsupervised method for training debunked-narrative retrieval models},
author={Singh, Iknoor and Scarton, Carolina and Bontcheva, Kalina},
journal={EPJ Data Science},
volume={12},
number={1},
pages={59},
year={2023},
publisher={Springer}
}
```
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('iknoor/UTDRM-RoBERTa')
embeddings = model.encode(sentences)
print(embeddings)
```
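Since UTDRM-RoBERTa is a debunked-narrative retrieval model, a natural follow-up is scoring a claim against candidate fact-checks by cosine similarity. A minimal sketch, with hypothetical example texts:
```python
# Hedged sketch: rank a small pool of fact-check texts against a claim by
# cosine similarity of the embeddings. The texts are hypothetical examples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("iknoor/UTDRM-RoBERTa")

claim = "5G towers spread the virus"
fact_checks = [
    "Fact-check: there is no evidence linking 5G networks to the spread of viruses.",
    "Fact-check: the photo shows a 2015 flood, not the recent storm.",
]

claim_emb = model.encode(claim, convert_to_tensor=True)
fc_embs = model.encode(fact_checks, convert_to_tensor=True)

scores = util.cos_sim(claim_emb, fc_embs)[0]
best = scores.argmax().item()
print(fact_checks[best], float(scores[best]))
```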
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
|
Ivan0831/PPO-LunarLander-V5
|
Ivan0831
| 2024-01-24T16:17:17Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T16:05:28Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 143.23 +/- 89.84
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 500000,
 'learning_rate': 0.001,
 'num_envs': 8,
 'num_steps': 512,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 32,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.1,
 'clip_vloss': True,
 'ent_coef': 0.005,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'Ivan0831/PPO-LunarLander-V5',
 'batch_size': 4096,
 'minibatch_size': 128}
```
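The last two values are consistent with the rest, assuming the usual cleanRL relationships between environments, rollout length, and minibatches:
```python
# Quick sanity check, assuming the standard cleanRL derivations
# batch_size = num_envs * num_steps and
# minibatch_size = batch_size // num_minibatches.
num_envs = 8
num_steps = 512
num_minibatches = 32

batch_size = num_envs * num_steps               # 8 * 512 = 4096
minibatch_size = batch_size // num_minibatches  # 4096 // 32 = 128

assert batch_size == 4096 and minibatch_size == 128
```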
|
eanderson/xlm-roberta-large-qa_norwegian
|
eanderson
| 2024-01-24T16:12:03Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-24T14:56:34Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-large-qa_norwegian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-qa_norwegian
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a hedged `TrainingArguments` sketch follows the list:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
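A minimal sketch of expressing these settings with the Trainer API. Dataset preparation and the question-answering head are omitted, and the output directory name is hypothetical:
```python
# Hedged sketch: map the hyperparameters above onto TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-large-qa_norwegian",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",  # Adam betas/epsilon above are the defaults
)
```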
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5804 | 1.0 | 1024 | 2.8475 |
| 2.2901 | 2.0 | 2048 | 2.3869 |
| 1.2687 | 3.0 | 3072 | 2.8818 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
jfmatos-isq/xlm-roberta-base-finetuned-panx-de
|
jfmatos-isq
| 2024-01-24T16:11:43Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-23T16:35:17Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8597727272727272
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1363
- F1: 0.8598
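A minimal inference sketch, assuming the standard `transformers` token-classification pipeline; the German example sentence is hypothetical:
```python
# Hedged sketch: run named-entity recognition with this checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="jfmatos-isq/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte das Goethe-Institut in Berlin."))
```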
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2552 | 1.0 | 525 | 0.1783 | 0.8162 |
| 0.1286 | 2.0 | 1050 | 0.1390 | 0.8473 |
| 0.0821 | 3.0 | 1575 | 0.1363 | 0.8598 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.10.3
|