| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
dlibf/zephyr-7b-dpo-full_sft3epoch | dlibf | 2024-01-26T04:54:18Z | 4 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "mistral", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-26T02:23:44Z |
---
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full_sft3epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-full_sft3epoch
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6172
- Rewards/chosen: -1.5792
- Rewards/rejected: -1.8655
- Rewards/accuracies: 0.625
- Rewards/margins: 0.2863
- Logps/rejected: -1146.1112
- Logps/chosen: -1218.1312
- Logits/rejected: -3.6422
- Logits/chosen: -3.6317
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6601 | 0.21 | 100 | 0.6572 | -0.5696 | -0.7082 | 0.6133 | 0.1387 | -1030.3876 | -1117.1681 | -3.8281 | -3.8175 |
| 0.6329 | 0.42 | 200 | 0.6378 | -1.1629 | -1.3547 | 0.6523 | 0.1918 | -1095.0327 | -1176.4983 | -3.7205 | -3.7123 |
| 0.6251 | 0.63 | 300 | 0.6219 | -1.5227 | -1.7758 | 0.6484 | 0.2530 | -1137.1422 | -1212.4856 | -3.6798 | -3.6707 |
| 0.6163 | 0.84 | 400 | 0.6192 | -1.4583 | -1.7357 | 0.6289 | 0.2774 | -1133.1334 | -1206.0380 | -3.6473 | -3.6358 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
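The card above does not include usage instructions. As a rough illustration only, a DPO-tuned Mistral-style checkpoint like this one can usually be loaded as a standard causal LM; the chat-template call and generation settings below are assumptions, not something documented in this card.
```python
# Hedged sketch: load the checkpoint as a plain causal LM and generate a reply.
# The chat template usage is an assumption; adjust to however the model was actually trained.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dlibf/zephyr-7b-dpo-full_sft3epoch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```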
|
csukuangfj/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2 | csukuangfj | 2024-01-26T04:33:06Z | 0 | 0 | null | ["region:us"] | null | 2024-01-26T04:13:32Z |
This repo is forked from
https://huggingface.co/luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2
Note: This recipe is trained with the code from this PR: https://github.com/k2-fsa/icefall/pull/355
and the SpecAugment code from this PR: https://github.com/lhotse-speech/lhotse/pull/604.
# Pre-trained Transducer-Stateless2 models for the Aidatatang_200zh dataset with icefall.
The model was trained on the full [Aidatatang_200zh](https://www.openslr.org/62) dataset with the scripts in [icefall](https://github.com/k2-fsa/icefall), based on the latest version of k2.
## Training procedure
The main repositories are listed below; the training and decoding scripts will be updated as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
lhotse: https://github.com/lhotse-speech/lhotse
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html and the lhotse installation guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should work. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/aidatatang_200zh/ASR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0,1"
./pruned_transducer_stateless2/train.py \
--world-size 2 \
--num-epochs 30 \
--start-epoch 0 \
--exp-dir pruned_transducer_stateless2/exp \
--lang-dir data/lang_char \
--max-duration 250
```
## Evaluation results
The decoding results (WER%) on Aidatatang_200zh (dev and test) are listed below; they were obtained by averaging models from epoch 11 to 29.
The WERs are
| | dev | test | comment |
|------------------------------------|------------|------------|------------------------------------------|
| greedy search | 5.53 | 6.59 | --epoch 29, --avg 19, --max-duration 100 |
| modified beam search (beam size 4) | 5.28 | 6.32 | --epoch 29, --avg 19, --max-duration 100 |
| fast beam search (set as default) | 5.29 | 6.33 | --epoch 29, --avg 19, --max-duration 1500|
|
e22vvb/EN_mt5-base_5_wikiSQL | e22vvb | 2024-01-26T04:31:56Z | 1 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "generated_from_trainer", "dataset:wikisql", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-01-22T17:50:32Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikisql
model-index:
- name: EN_mt5-base_5_wikiSQL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EN_mt5-base_5_wikiSQL
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wikisql dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0907
- Rouge2 Precision: 0.8556
- Rouge2 Recall: 0.7785
- Rouge2 Fmeasure: 0.8095
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.156 | 1.0 | 4049 | 0.1163 | 0.8282 | 0.7534 | 0.7831 |
| 0.1218 | 2.0 | 8098 | 0.1007 | 0.8452 | 0.7679 | 0.7989 |
| 0.1056 | 3.0 | 12147 | 0.0944 | 0.8521 | 0.7749 | 0.8058 |
| 0.0967 | 4.0 | 16196 | 0.0921 | 0.8552 | 0.7784 | 0.8092 |
| 0.0935 | 5.0 | 20245 | 0.0907 | 0.8556 | 0.7785 | 0.8095 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.7.dev0
- Tokenizers 0.13.3
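Since the card lists no usage section, here is a minimal inference sketch; the input formatting (a plain English question in, SQL out) is an assumption based on the wikisql fine-tuning task, not documented behavior of this checkpoint.
```python
# Hedged sketch: run the fine-tuned mT5 checkpoint as a seq2seq model.
# The exact prompt format expected by this checkpoint is an assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "e22vvb/EN_mt5-base_5_wikiSQL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "How many singers are from France?"
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```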
|
cris177/DesivoMerge0.1 | cris177 | 2024-01-26T04:31:06Z | 48 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-26T03:29:00Z |
---
license: apache-2.0
tags:
- merge
- mergekit
---
# DesivoMerge0.1
DesivoMerge0.1 is a merge of several models using [mergekit](https://github.com/cg123/mergekit).
The idea is to continuously merge models into a main model. The first merge was between open-orca-mistral-7B and open-hermes-7B; I then merged the result with the best-performing 7B model on the Open LLM Leaderboard (TurdusBeagle-7B).
I will keep adding models to the merge until the average score of the models in the merge is lower than the score of the previous merge, in which case I will backtrack and find another model to merge, as sketched below.
I will try to avoid contaminated models by looking into each candidate before merging it.
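The iterative procedure described above amounts to a greedy loop with backtracking. The sketch below is purely schematic: `merge_with`, `benchmark_score`, and the candidate list are hypothetical placeholders standing in for running mergekit and an evaluation harness, not real APIs.
```python
# Schematic of the merge-until-scores-drop strategy described above.
# merge_with() and benchmark_score() are hypothetical stand-ins for running
# mergekit and an evaluation harness; they are not real APIs.
def iterative_merge(main_model, candidates, merge_with, benchmark_score):
    best_score = benchmark_score(main_model)
    for candidate in candidates:
        merged = merge_with(main_model, candidate)   # e.g. a slerp merge as in the config below
        score = benchmark_score(merged)
        if score >= best_score:
            main_model, best_score = merged, score   # keep the improvement
        # otherwise backtrack: keep the previous main model and try the next candidate
    return main_model
```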
## 🧩 Configuration
```yaml
slices:
- sources:
- model: ./merge
layer_range: [0, 32]
- model: Azazelle/Argetsu
layer_range: [0, 32]
merge_method: slerp
base_model: ./merge
tokenizer_source: base
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
royallab/Buttercup-4x7B-exl2 | royallab | 2024-01-26T04:25:54Z | 1 | 1 | null | ["en", "license:apache-2.0", "region:us"] | null | 2024-01-25T23:03:38Z |
---
license: apache-2.0
language:
- en
---
## Information
This is an Exl2-quantized version of [Buttercup-4x7B-bf16](https://huggingface.co/Kquant03/Buttercup-4x7B-bf16).
Please refer to the original creator for more information.
Calibration dataset: Exllamav2 default
## Branches:
- main: Measurement files
- 4bpw: 4 bits per weight
- 5bpw: 5 bits per weight
- 6bpw: 6 bits per weight
## Notes
- 6bpw is recommended for the best quality-to-VRAM-usage ratio (assuming you have enough VRAM).
- Please ask for more bpws in the community tab if necessary.
## Run in TabbyAPI
TabbyAPI is a pure exllamav2 FastAPI server developed by us. You can find TabbyAPI's source code here: [https://github.com/theroyallab/TabbyAPI](https://github.com/theroyallab/TabbyAPI)
If you don't have huggingface-cli, please run `pip install huggingface_hub`.
To run this model, follow these steps:
1. Make a directory inside your models folder called `Buttercup-4x7B-bf16-exl2`
2. Open a terminal inside your models folder
3. Run `huggingface-cli download royallab/Buttercup-4x7B-bf16-exl2 --revision 6.0bpw-h6 --local-dir Buttercup-4x7B-bf16-exl2 --local-dir-use-symlinks False` (a Python alternative is sketched after these steps)
1. The `--revision` flag corresponds to the branch name on the model repo. Please select the appropriate bpw branch for your system.
4. Inside TabbyAPI's config.yml, set `model_name` to `Buttercup-4x7B-bf16-exl2` or you can use the `/model/load` endpoint after launching.
5. Launch TabbyAPI inside your python env by running `python main.py`
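The download in step 3 can also be done from Python with `huggingface_hub`; this sketch reuses the repo and revision names from the command above, so pick the bpw branch that fits your hardware.
```python
# Hedged Python equivalent of the huggingface-cli command in step 3.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="royallab/Buttercup-4x7B-bf16-exl2",   # name taken from the CLI command above
    revision="6.0bpw-h6",                          # branch name; adjust to the bpw you want
    local_dir="Buttercup-4x7B-bf16-exl2",
    local_dir_use_symlinks=False,
)
```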
## Donate?
All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/doctorshotgun
You should not feel obligated to donate, but if you do, I'd appreciate it.
---
|
LoneStriker/Snorkel-Mistral-PairRM-DPO-6.0bpw-h6-exl2 | LoneStriker | 2024-01-26T04:11:02Z | 4 | 0 | transformers | ["transformers", "pytorch", "mistral", "text-generation", "conversational", "dataset:snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset", "arxiv:2305.18290", "arxiv:2306.02561", "arxiv:2401.10020", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-24T19:38:56Z |
---
license: apache-2.0
datasets:
- snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
pipeline_tag: text-generation
---
We offer a temporary HF space for everyone to try out the model: [**Snorkel-Mistral-PairRM-DPO Space**](https://huggingface.co/spaces/snorkelai/snorkelai_mistral_pairrm_dpo_text_inference).
We also provide an inference endpoint for everyone to test the model.
It may initially take a few minutes to activate, but will eventually operate at the standard speed of HF's 7B model text inference endpoint.
The speed of inference depends on HF endpoint performance and is not related to Snorkel offerings.
This endpoint is designed for initial trials, not for ongoing production use. Have fun!
```python
import requests

# Query the temporary inference endpoint described above.
API_URL = "https://t1q6ks6fusyg1qq7.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json"
}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "[INST] Recommend me some Hollywood movies [/INST]",
    "parameters": {}
})
print(output)
```
### Dataset:
Training dataset: [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset)
We utilize ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized); **no external LLM responses used**.
### Methodology:
1. Generate five response variations for each prompt from a subset of 20,000 using the LLM - to start, we used [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
2. Apply [PairRM](https://huggingface.co/llm-blender/PairRM) for response reranking.
3. Update the LLM by applying Direct Preference Optimization (DPO) on the top (chosen) and bottom (rejected) responses.
4. Use this LLM as the base model for the next iteration, repeating three times in total.
This overview provides a high-level summary of our approach.
We plan to release more detailed results and findings in the coming weeks on the [Snorkel blog.](https://snorkel.ai/blog/)
The prompt format follows the Mistral model:
```[INST] {prompt} [/INST]```
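Schematically, one data-construction step of the methodology above looks like the following. This is only a sketch: `rank_responses` is a hypothetical placeholder standing in for PairRM reranking, and the generation settings are assumptions, not the exact recipe used here.
```python
# Schematic sketch of one DPO data-construction iteration described above.
# rank_responses() is a hypothetical stand-in for PairRM; it should return the
# candidate responses ordered from best to worst for the given prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # base model for the first iteration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def generate_candidates(prompt, n=5):
    text = f"[INST] {prompt} [/INST]"  # prompt format noted above
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256,
                             num_return_sequences=n)
    prompt_len = inputs["input_ids"].shape[1]
    return [tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]

def build_dpo_pair(prompt, rank_responses):
    candidates = generate_candidates(prompt)
    ranked = rank_responses(prompt, candidates)   # best ... worst (placeholder for PairRM)
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}
```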
### Training recipe:
- The provided data is formatted to be compatible with the Hugging Face's [Zephyr recipe](https://github.com/huggingface/alignment-handbook/tree/main/recipes/zephyr-7b-beta).
We executed the n_th DPO iteration using the "train/test_iteration_{n}".
### Key Premises:
- **Specialization Requirement**: For most enterprise use cases, using LLMs "off-the-shelf" falls short of production quality, necessitating additional fine-tuning and alignment.
- **Ease of Model Building**: Creating ranking/scoring/classification models is simpler than developing high-quality, manually annotated datasets for long-form responses.
- **Alignment Recipe**: Using smaller but specialized teacher models (reward models) can incrementally align LLMs towards specific axes.
### Applications:
Unlike our customers, who have very specific use cases to align LLMs to,
the AlpacaEval 2.0 leaderboard measures the ability of LLMs to follow user instructions.
With this demonstration, we focus on the general approach to alignment.
Thus, we use a general-purpose reward model - the performant [PairRM model](https://huggingface.co/llm-blender/PairRM).
We use the [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model as our base LLM.
If you are interested in building **specialized internal reward models
that reflect your enterprise's needs**, please contact the Snorkel AI team or consider attending our
[**Enterprise LLM Summit: Building GenAI with Your Data on January 25, 2024**](https://snorkel.ai/event/enterprise-llm-summit/)
to learn more about "Programmatically scaling human preferences and alignment in GenAI".
### Result:
On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/):
- The base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) scored **14.72**.
After applying the above methodology:
- This model scored **30.22** - ranked 3rd and the highest for an open-source base model at the time of publication.
- When post-processing the model outputs with PairRM-best-of-16, which involved generating 16 responses and selecting the highest-scoring response by PairRM, we scored **34.86** - ranked 2nd.
The best model on the leaderboard is "gpt-4-turbo", which is also the judge of optimal responses.
We recognize that the Alpaca-Eval 2.0 benchmark does not entirely capture the full range of capabilities and performances of LLMs.
However, in our current work, where the goal is to align with general "human preferences," Alpaca-Eval 2.0 serves as a suitable and representative benchmark.
Moving forward, we anticipate further contributions from the community regarding new alignment axes, and we will conduct evaluations using other appropriate benchmarks.
The Alpaca-Eval 2.0 evaluator, "gpt-4-turbo," exhibits a bias towards longer responses.
This tendency might also be present in our chosen reward model, resulting in our model producing lengthier responses after DPO iterations,
which may be among the factors contributing to our higher rank on the leaderboard.
Future work could include measures to control response length and other relevant metrics.
### Limitations:
The model is a quick demonstration that LLMs can be programmatically aligned using smaller, specialized reward models.
It does not have any moderation mechanisms.
We look forward to continuing to engage with the research community and our customers exploring optimal methods for getting models to respect guardrails,
allowing for deployment in environments requiring moderated outputs.
### Contemporary Work and Acknowledgements:
- The Mistral AI Team for developing and releasing the advanced Mistral-7B-Instruct-v0.2 model.
- The author of the [Direct Preference Optimization paper](https://arxiv.org/abs/2305.18290) for the innovative approach
- The author of the [Pairwise Reward Model for LLMs paper](https://arxiv.org/abs/2306.02561) for the powerful general-purpose reward model
- The HuggingFace team for the DPO implementation under [The Alignment Handbook](https://github.com/huggingface/alignment-handbook)
- We would also like to acknowledge contemporary work published independently on arXiv on 2024-01-18 by Meta & NYU (Yuan, et al) in a paper called [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020),
which proposes a similar general approach for creating alignment pairs from a larger set of candidate responses, but using the LLM as the reward model.
While this may work for general-purpose models, our experience has shown that task-specific reward models guided by SMEs are necessary for most
enterprise applications of LLMs for specific use cases, which is why we focus on the use of external reward models.
### The Snorkel AI Team
Hoang Tran, Chris Glaze, Braden Hancock
|
varun-v-rao/t5-base-bn-adapter-1.79M-snli | varun-v-rao | 2024-01-26T04:07:31Z | 0 | 0 | null | ["tensorboard", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "region:us"] | null | 2024-01-25T21:57:02Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-base-bn-adapter-1.79M-snli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-bn-adapter-1.79M-snli
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7025
- Accuracy: 0.751
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4087 | 1.0 | 8584 | 0.3305 | 0.8753 |
| 0.3819 | 2.0 | 17168 | 0.3120 | 0.8844 |
| 0.3661 | 3.0 | 25752 | 0.3068 | 0.8858 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
kiwakwok/fine-tuned-roberta-xlarge | kiwakwok | 2024-01-26T04:06:41Z | 91 | 0 | transformers | ["transformers", "pytorch", "bert", "question-answering", "license:mit", "endpoints_compatible", "region:us"] | question-answering | 2024-01-18T07:42:19Z |
---
license: mit
---
This model is based on RoBERTa-xlarge (layers = 36, hidden size = 1536), fine-tuned using the following datasets:
- CMRC2018
- DRCD
- Squad Zen v1.0
This model was created for research purposes only.
|
JandC/donut-base-full_text | JandC | 2024-01-26T04:05:41Z | 33 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "base_model:finetune:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us"] | image-text-to-text | 2024-01-25T22:59:17Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-full_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-full_text
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
csukuangfj/icefall_asr_wenetspeech_pruned_transducer_stateless2 | csukuangfj | 2024-01-26T04:00:23Z | 0 | 0 | null | ["onnx", "region:us"] | null | 2024-01-26T03:47:56Z |
This repo is forked from
https://huggingface.co/luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2
Note: This recipe is trained with the code from this PR: https://github.com/k2-fsa/icefall/pull/349
# Pre-trained Transducer-Stateless2 models for the WenetSpeech dataset with icefall.
The model was trained on the L subset of WenetSpeech with the scripts in [icefall](https://github.com/k2-fsa/icefall), based on the latest version of k2.
## Training procedure
The main repositories are listed below; the training and decoding scripts will be updated as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
lhotse: https://github.com/lhotse-speech/lhotse
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html and the lhotse installation guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should work. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/wenetspeech/ASR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
./pruned_transducer_stateless2/train.py \
--world-size 8 \
--num-epochs 15 \
--start-epoch 0 \
--exp-dir pruned_transducer_stateless2/exp \
--lang-dir data/lang_char \
--max-duration 180 \
--valid-interval 3000 \
--model-warm-step 3000 \
--save-every-n 8000 \
--training-subset L
```
## Evaluation results
The decoding results (WER%) on WenetSpeech (dev, test-net and test-meeting) are listed below; they were obtained by averaging models from epoch 9 to 10.
The WERs are
| | dev | test-net | test-meeting | comment |
|------------------------------------|-------|----------|--------------|------------------------------------------|
| greedy search | 7.80 | 8.75 | 13.49 | --epoch 10, --avg 2, --max-duration 100 |
| modified beam search (beam size 4) | 7.76 | 8.71 | 13.41 | --epoch 10, --avg 2, --max-duration 100 |
| fast beam search (1best) | 7.94 | 8.74 | 13.80 | --epoch 10, --avg 2, --max-duration 1500 |
| fast beam search (nbest) | 9.82 | 10.98 | 16.37 | --epoch 10, --avg 2, --max-duration 600 |
| fast beam search (nbest oracle) | 6.88 | 7.18 | 11.77 | --epoch 10, --avg 2, --max-duration 600 |
| fast beam search (nbest LG) | 14.94 | 16.14 | 22.93 | --epoch 10, --avg 2, --max-duration 600 |
|
LoneStriker/Snorkel-Mistral-PairRM-DPO-3.0bpw-h6-exl2 | LoneStriker | 2024-01-26T03:57:44Z | 4 | 0 | transformers | ["transformers", "pytorch", "mistral", "text-generation", "conversational", "dataset:snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset", "arxiv:2305.18290", "arxiv:2306.02561", "arxiv:2401.10020", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-24T19:29:29Z |
---
license: apache-2.0
datasets:
- snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset
pipeline_tag: text-generation
---
We offer a temporary HF space for everyone to try out the model: [**Snorkel-Mistral-PairRM-DPO Space**](https://huggingface.co/spaces/snorkelai/snorkelai_mistral_pairrm_dpo_text_inference).
We also provide an inference endpoint for everyone to test the model.
It may initially take a few minutes to activate, but will eventually operate at the standard speed of HF's 7B model text inference endpoint.
The speed of inference depends on HF endpoint performance and is not related to Snorkel offerings.
This endpoint is designed for initial trials, not for ongoing production use. Have fun!
```python
import requests

# Query the temporary inference endpoint described above.
API_URL = "https://t1q6ks6fusyg1qq7.us-east-1.aws.endpoints.huggingface.cloud"
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json"
}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "[INST] Recommend me some Hollywood movies [/INST]",
    "parameters": {}
})
print(output)
```
### Dataset:
Training dataset: [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset)
We utilize ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized); **no external LLM responses used**.
### Methodology:
1. Generate five response variations for each prompt from a subset of 20,000 using the LLM - to start, we used [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
2. Apply [PairRM](https://huggingface.co/llm-blender/PairRM) for response reranking.
3. Update the LLM by applying Direct Preference Optimization (DPO) on the top (chosen) and bottom (rejected) responses.
4. Use this LLM as the base model for the next iteration, repeating three times in total.
This overview provides a high-level summary of our approach.
We plan to release more detailed results and findings in the coming weeks on the [Snorkel blog.](https://snorkel.ai/blog/)
The prompt format follows the Mistral model:
```[INST] {prompt} [/INST]```
### Training recipe:
- The provided data is formatted to be compatible with the Hugging Face's [Zephyr recipe](https://github.com/huggingface/alignment-handbook/tree/main/recipes/zephyr-7b-beta).
We executed the n_th DPO iteration using the "train/test_iteration_{n}".
### Key Premises:
- **Specialization Requirement**: For most enterprise use cases, using LLMs "off-the-shelf" falls short of production quality, necessitating additional fine-tuning and alignment.
- **Ease of Model Building**: Creating ranking/scoring/classification models is simpler than developing high-quality, manually annotated datasets for long-form responses.
- **Alignment Recipe**: Using smaller but specialized teacher models (reward models) can incrementally align LLMs towards specific axes.
### Applications:
Unlike our customers, who have very specific use cases to align LLMs to,
the AlpacaEval 2.0 leaderboard measures the ability of LLMs to follow user instructions.
With this demonstration, we focus on the general approach to alignment.
Thus, we use a general-purpose reward model - the performant [PairRM model](https://huggingface.co/llm-blender/PairRM).
We use the [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model as our base LLM.
If you are interested in building **specialized internal reward models
that reflect your enterprise's needs**, please contact the Snorkel AI team or consider attending our
[**Enterprise LLM Summit: Building GenAI with Your Data on January 25, 2024**](https://snorkel.ai/event/enterprise-llm-summit/)
to learn more about "Programmatically scaling human preferences and alignment in GenAI".
### Result:
On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/):
- The base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) scored **14.72**.
After applying the above methodology:
- This model scored **30.22** - ranked 3rd and the highest for an open-source base model at the time of publication.
- When post-processing the model outputs with PairRM-best-of-16, which involved generating 16 responses and selecting the highest-scoring response by PairRM, we scored **34.86** - ranked 2nd.
The best model on the leaderboard is "gpt-4-turbo", which is also the judge of optimal responses.
We recognize that the Alpaca-Eval 2.0 benchmark does not entirely capture the full range of capabilities and performances of LLMs.
However, in our current work, where the goal is to align with general "human preferences," Alpaca-Eval 2.0 serves as a suitable and representative benchmark.
Moving forward, we anticipate further contributions from the community regarding new alignment axes, and we will conduct evaluations using other appropriate benchmarks.
The Alpaca-Eval 2.0 evaluator, "gpt-4-turbo," exhibits a bias towards longer responses.
This tendency might also be present in our chosen reward model, resulting in our model producing lengthier responses after DPO iterations,
which may be among the factors contributing to our higher rank on the leaderboard.
Future work could include measures to control response length and other relevant metrics.
### Limitations:
The model is a quick demonstration that LLMs can be programmatically aligned using smaller, specialized reward models.
It does not have any moderation mechanisms.
We look forward to continuing to engage with the research community and our customers exploring optimal methods for getting models to respect guardrails,
allowing for deployment in environments requiring moderated outputs.
### Contemporary Work and Acknowledgements:
- The Mistral AI Team for developing and releasing the advanced Mistral-7B-Instruct-v0.2 model.
- The author of the [Direct Preference Optimization paper](https://arxiv.org/abs/2305.18290) for the innovative approach
- The author of the [Pairwise Reward Model for LLMs paper](https://arxiv.org/abs/2306.02561) for the powerful general-purpose reward model
- The HuggingFace team for the DPO implementation under [The Alignment Handbook](https://github.com/huggingface/alignment-handbook)
- We would also like to acknowledge contemporary work published independently on arXiv on 2024-01-18 by Meta & NYU (Yuan, et al) in a paper called [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020),
which proposes a similar general approach for creating alignment pairs from a larger set of candidate responses, but using the LLM as the reward model.
While this may work for general-purpose models, our experience has shown that task-specific reward models guided by SMEs are necessary for most
enterprise applications of LLMs for specific use cases, which is why we focus on the use of external reward models.
### The Snorkel AI Team
Hoang Tran, Chris Glaze, Braden Hancock
|
REDLOK/Redlok | REDLOK | 2024-01-26T03:37:25Z | 0 | 0 | adapter-transformers | ["adapter-transformers", "chemistry", "text-generation", "dataset:fka/awesome-chatgpt-prompts", "license:apache-2.0", "region:us"] | text-generation | 2024-01-26T03:36:19Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- chemistry
---
|
silk-road/Haruhi-Dialogue-Speaker-Extract_qwen18 | silk-road | 2024-01-26T03:29:34Z | 276 | 3 | transformers | ["transformers", "safetensors", "qwen", "feature-extraction", "text-generation-inference", "custom_code", "zh", "en", "license:apache-2.0", "region:us"] | feature-extraction | 2024-01-26T01:09:21Z |
---
license: apache-2.0
language:
- zh
- en
tags:
- text-generation-inference
---
# Chat Haruhi Suzumiya Dialogue Extraction Model
We wanted a model that can extract summaries and dialogues in batches from chunks of novels, and this model achieves just that. It was trained on approximately 30k Chinese and 20k English novel samples and fine-tuned on qwen-1.8 for three epochs. In principle, the model supports extraction from both Chinese and English novels.
Main project link: https://github.com/LC1332/Chat-Haruhi-Suzumiya
- [李鲁鲁](https://github.com/LC1332) collected the data and extended the inference pipeline to continuous chunks
- [刘崇寒](https://github.com/khazic) trained the model
- [米唯实](https://github.com/hhhwmws0117) tested the model and uploaded it to Hugging Face
# Inference Code
https://github.com/LC1332/Chat-Haruhi-Suzumiya/blob/main/notebook/Dialogue_Speaker_Extract_Test.ipynb
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("silk-road/Haruhi-Dialogue-Speaker-Extract_qwen18", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("silk-road/Haruhi-Dialogue-Speaker-Extract_qwen18", device_map="auto", trust_remote_code=True)
sys_prompt = "给定input paragraph,抽取其中的对话,并输出为json格式 Let's think it step by step 1. summarize input paragraph into bullet format,存储在summary字段 2. 抽取每一句对话的内容 dialogue,判断每一句话的说话人 said by, 存储在conversations中"
text = "Your novel text"
response_str, history = model.chat(tokenizer, text, history=[], system=sys_prompt)
```
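Continuing from the snippet above, the model is expected to return a JSON-formatted string like the examples below; a minimal way to consume it (assuming the output parses cleanly, which the TODO list notes is not always the case):
```python
import json

# Parse the extraction result; fall back to the raw string if it is not valid JSON.
try:
    result = json.loads(response_str)
    print(result["summary"])
    for turn in result["conversations"]:
        print(turn["said_by"], ":", turn["dialogue"])
except (json.JSONDecodeError, KeyError):
    print(response_str)
```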
# Official Prompt
Chinese:
```
给定input paragraph,抽取其中的对话,并输出为json格式 Let's think it step by step 1. summarize input paragraph into bullet format,存储在summary字段 2. 抽取每一句对话的内容 dialogue,判断每一句话的说话人 said by, 存储在conversations中
```
English:
```
Given an input paragraph, extract the dialogues within it, and output them in JSON format.
Let's think about it step by step:
- Summarize the input paragraph into bullet points and store it in the 'summary' field.
- Extract the content of each dialogue ('dialogue'), identify the speaker for each sentence ('said by'), and store these in 'conversations'.
```
### TODO
- [x] Expand to multi-chunk inference
- [x] Provide an English example
- [ ] Provide an example of multi-chapter parallel inference
- [ ] Try extracting the summary directly from the raw string when JSON parsing fails
- [ ] Additionally attempt to call OpenAI for inference when extraction fails
## Chinese Output Example
```
{'summary': '- 彭蠡不在家中,老刀感到担忧并等待着彭蠡回家的时间,同时观察周围环境和人们的消费行为,表现出内心的饥饿感和焦虑情绪。', 'conversations': [{'dialogue': '哎,你们知道那儿一盘回锅肉多少钱吗?', 'said_by': '小李'}, {'dialogue': '靠,菜里有沙子。', 'said_by': '小丁'}, {'dialogue': '人家那儿一盘回锅肉,就三百四。', 'said_by': '小李'}, {'dialogue': '什么玩意?这么贵。', 'said_by': '小丁'}, {'dialogue': '你吃不了这么多。', 'said_by': '小李'}]}
{'summary': '- 彭蠡在家等待彭蠡回家,表现出内心的饥饿感和焦虑情绪,同时对彭蠡的行为表示不满和失望。彭蠡则对老刀的行为表现出冷漠和不屑的态度。', 'conversations': [{'dialogue': '我没时间和你解释。我需要去第一空间,你告诉我怎么走。', 'said_by': '老刀'}, {'dialogue': '回我家说,要走也从那儿走。', 'said_by': '彭蠡'}, {'dialogue': '回家啦,回家啦。转换马上开始了。', 'said_by': '车上的人'}, {'dialogue': '你不告诉我为什么,我就不告诉你怎么走。', 'said_by': '彭蠡'}, {'dialogue': '你躲在垃圾道里?去第二空间?那你得等24小时啊。', 'said_by': '彭蠡'}, {'dialogue': '二十万块。等一礼拜也值啊。', 'said_by': '老刀'}, {'dialogue': '你就这么缺钱花?', 'said_by': '彭蠡'}, {'dialogue': '糖糖还有一年多该去幼儿园了。我来不及了。', 'said_by': '老刀'}, {'dialogue': '你别说了。', 'said_by': '彭蠡'}]}
{'summary': '- 彭蠡对彭蠡的行为表现出不满和失望,同时对老刀的行为表现出冷漠和不屑的态度。', 'conversations': [{'dialogue': '你真是作死,她又不是你闺女,犯得着吗。', 'said_by': '彭蠡'}, {'dialogue': '别说这些了。快告我怎么走。', 'said_by': '老刀'}, {'dialogue': '你可得知道,万一被抓着,可不只是罚款,得关上好几个月。', 'said_by': '彭蠡'}, {'dialogue': '你不是去过好多次吗?', 'said_by': '老刀'}, {'dialogue': '只有四次。第五次就被抓了。', 'said_by': '彭蠡'}, {'dialogue': '那也够了。我要是能去四次,抓一次也无所谓。', 'said_by': '老刀'}, {'dialogue': '别说了。你要是真想让我带你去,我就带你去。', 'said_by': '彭蠡'}]}
- 彭蠡不在家中,老刀感到担忧并等待着彭蠡回家的时间,同时观察周围环境和人们的消费行为,表现出内心的饥饿感和焦虑情绪。
小李 : 哎,你们知道那儿一盘回锅肉多少钱吗?
小丁 : 靠,菜里有沙子。
小李 : 人家那儿一盘回锅肉,就三百四。
小丁 : 什么玩意?这么贵。
小李 : 你吃不了这么多。
- 彭蠡在家等待彭蠡回家,表现出内心的饥饿感和焦虑情绪,同时对彭蠡的行为表示不满和失望。彭蠡则对老刀的行为表现出冷漠和不屑的态度。
老刀 : 我没时间和你解释。我需要去第一空间,你告诉我怎么走。
彭蠡 : 回我家说,要走也从那儿走。
车上的人 : 回家啦,回家啦。转换马上开始了。
彭蠡 : 你不告诉我为什么,我就不告诉你怎么走。
彭蠡 : 你躲在垃圾道里?去第二空间?那你得等24小时啊。
老刀 : 二十万块。等一礼拜也值啊。
彭蠡 : 你就这么缺钱花?
老刀 : 糖糖还有一年多该去幼儿园了。我来不及了。
彭蠡 : 你别说了。
- 彭蠡对彭蠡的行为表现出不满和失望,同时对老刀的行为表现出冷漠和不屑的态度。
彭蠡 : 你真是作死,她又不是你闺女,犯得着吗。
老刀 : 别说这些了。快告我怎么走。
彭蠡 : 你可得知道,万一被抓着,可不只是罚款,得关上好几个月。
老刀 : 你不是去过好多次吗?
彭蠡 : 只有四次。第五次就被抓了。
老刀 : 那也够了。我要是能去四次,抓一次也无所谓。
彭蠡 : 别说了。你要是真想让我带你去,我就带你去。
```
## English Output Example
```
{'summary': "Snow-covered Paris, Kimura's workshop, artist and viewer engaging in conversation.", 'conversations': [{'dialogue': 'You should hear the stories they tell of you at the café. If Émile is to be believed, you arrived here as an ukiyo-e courtesan, nothing more than paper wrapped around a porcelain bowl. A painter—he will not say which of us it was, of course—bought the bowl and the print along with it.', 'said_by': 'Artist'}, {'dialogue': 'And the painter pulled me from the print with the sheer force of his imagination, I’m sure. Émile is a novelist and can hardly be trusted to give an accurate account. The reality of my conception is vastly more mundane, I assure you…though it does involve a courtesan.', 'said_by': 'Woman'}, {'dialogue': 'A grain of truth makes for the best fiction. nude, but leave the jewelry and the shoes. I’ll paint you on the chaise. We’ll have three hours in the proper light, and I will pay you four francs.', 'said_by': 'Artist'}, {'dialogue': 'Victorine gets five!', 'said_by': 'Woman'}, {'dialogue': 'Victorine is a redhead.', 'said_by': 'Artist'}, {'dialogue': 'My name is Mariko, by the way, but everyone calls me Mari.', 'said_by': 'Mariko'}]}
{'summary': "Snow-covered Paris, Kimura's workshop, artist and viewer engaged in conversation. Artist and viewer engage in intimate conversation and interaction.", 'conversations': [{'dialogue': 'I’m on the chaise', 'said_by': 'Artist'}, {'dialogue': 'Bring your left hip forward. No, not that far. Bend the leg a bit more, yes. Turn your head to face the canvas.', 'said_by': 'Artist'}, {'dialogue': 'Like a Manet', 'said_by': 'Artist'}, {'dialogue': 'Don’t like a model that talks while you work, huh?', 'said_by': 'Artist'}, {'dialogue': 'I don’t like being compared to other artists.', 'said_by': 'Artist'}, {'dialogue': 'Then you must paint me so well that I forget about the others.', 'said_by': 'Artist'}, {'dialogue': 'Tilt your head into the light. And look at me intently. Intently. As though I were the one naked on the chaise.', 'said_by': 'Artist'}, {'dialogue': 'You did better than I would have expected.', 'said_by': 'Artist'}, {'dialogue': 'There are other poses I could show you, if you like?', 'said_by': 'Artist'}, {'dialogue': 'But the sooner I get started on this portrait, the better.', 'said_by': 'Artist'}]}
{'summary': "Kimura's workshop, artist and viewer engaging in intimate conversation and interaction. Kimura responds with a strong, cold embrace, leading to a passionate physical exchange. Afterward, the artist falls asleep, leaving the narrator feeling incomplete and longing.", 'num': 14, 'conversations': [{'dialogue': 'I could show you other poses.', 'said_by': 'Kimura'}, {'dialogue': 'Yes.', 'said_by': 'Kimura'}, {'dialogue': 'See you tomorrow?', 'said_by': 'Artist'}]}
Snow-covered Paris, Kimura's workshop, artist and viewer engaging in conversation.
Artist : You should hear the stories they tell of you at the café. If Émile is to be believed, you arrived here as an ukiyo-e courtesan, nothing more than paper wrapped around a porcelain bowl. A painter—he will not say which of us it was, of course—bought the bowl and the print along with it.
Woman : And the painter pulled me from the print with the sheer force of his imagination, I’m sure. Émile is a novelist and can hardly be trusted to give an accurate account. The reality of my conception is vastly more mundane, I assure you…though it does involve a courtesan.
Artist : A grain of truth makes for the best fiction. nude, but leave the jewelry and the shoes. I’ll paint you on the chaise. We’ll have three hours in the proper light, and I will pay you four francs.
Woman : Victorine gets five!
Artist : Victorine is a redhead.
Mariko : My name is Mariko, by the way, but everyone calls me Mari.
Snow-covered Paris, Kimura's workshop, artist and viewer engaged in conversation. Artist and viewer engage in intimate conversation and interaction.
Artist : I’m on the chaise
Artist : Bring your left hip forward. No, not that far. Bend the leg a bit more, yes. Turn your head to face the canvas.
Artist : Like a Manet
Artist : Don’t like a model that talks while you work, huh?
Artist : I don’t like being compared to other artists.
Artist : Then you must paint me so well that I forget about the others.
Artist : Tilt your head into the light. And look at me intently. Intently. As though I were the one naked on the chaise.
Artist : You did better than I would have expected.
Artist : There are other poses I could show you, if you like?
Artist : But the sooner I get started on this portrait, the better.
Kimura's workshop, artist and viewer engaging in intimate conversation and interaction. Kimura responds with a strong, cold embrace, leading to a passionate physical exchange. Afterward, the artist falls asleep, leaving the narrator feeling incomplete and longing.
Kimura : I could show you other poses.
Kimura : Yes.
Artist : See you tomorrow?
```
|
CheriTangerine/Scoups | CheriTangerine | 2024-01-26T03:28:03Z | 0 | 0 | null | ["license:other", "region:us"] | null | 2024-01-26T03:25:42Z |
---
license: other
license_name: scoups
license_link: LICENSE
---
|
liudoujiang/liudoujiang | liudoujiang | 2024-01-26T02:50:52Z | 0 | 0 | ml-agents | ["ml-agents", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2024-01-26T02:48:36Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ksdksu/liudoujiang
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
kanishka/smolm-autoreg-bpe-counterfactual-babylm-keys_to_pipps_all-1e-4 | kanishka | 2024-01-26T02:45:27Z | 6 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-25T03:36:04Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-keys_to_pipps_all-1e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-keys_to_pipps_all-1e-4
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3725
- Accuracy: 0.4068
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 4.0248 | 1.0 | 19014 | 4.2224 | 0.3118 |
| 3.5712 | 2.0 | 38028 | 3.6678 | 0.3643 |
| 3.3951 | 3.0 | 57042 | 3.5394 | 0.3804 |
| 3.2986 | 4.0 | 76056 | 3.4576 | 0.3890 |
| 3.2317 | 5.0 | 95070 | 3.4292 | 0.3932 |
| 3.1832 | 6.0 | 114084 | 3.3886 | 0.3968 |
| 3.1425 | 7.0 | 133098 | 3.3770 | 0.3993 |
| 3.1062 | 8.0 | 152112 | 3.3730 | 0.4013 |
| 3.0733 | 9.0 | 171126 | 3.3467 | 0.4030 |
| 3.0496 | 10.0 | 190140 | 3.3508 | 0.4038 |
| 3.0261 | 11.0 | 209154 | 3.3645 | 0.4039 |
| 3.003 | 12.0 | 228168 | 3.3508 | 0.4050 |
| 2.9808 | 13.0 | 247182 | 3.3479 | 0.4059 |
| 2.9585 | 14.0 | 266196 | 3.3544 | 0.4059 |
| 2.9413 | 15.0 | 285210 | 3.3535 | 0.4063 |
| 2.9256 | 16.0 | 304224 | 3.3666 | 0.4058 |
| 2.9089 | 17.0 | 323238 | 3.3630 | 0.4067 |
| 2.8949 | 18.0 | 342252 | 3.3693 | 0.4064 |
| 2.8723 | 19.0 | 361266 | 3.3718 | 0.4067 |
| 2.8605 | 20.0 | 380280 | 3.3725 | 0.4068 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.14.1
|
varun-v-rao/t5-base-mnli-model1 | varun-v-rao | 2024-01-26T02:38:01Z | 37 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "t5", "text-classification", "generated_from_trainer", "base_model:google-t5/t5-base", "base_model:finetune:google-t5/t5-base", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-classification | 2024-01-25T23:57:56Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-base-mnli-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-mnli-model1
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9030
- Accuracy: 0.7125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3393 | 1.0 | 6136 | 0.4004 | 0.8617 |
| 0.3064 | 2.0 | 12272 | 0.3921 | 0.8662 |
| 0.2836 | 3.0 | 18408 | 0.4004 | 0.8655 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
jlbaker361/dcgan-wikiart100 | jlbaker361 | 2024-01-26T02:37:37Z | 0 | 0 | null | ["region:us"] | null | 2024-01-25T03:37:42Z |
---
{}
---
Creative Adversarial Network
- epochs: 50
- dataset: jlbaker361/wikiart-balanced100
- n_classes: 27
- batch_size: 4
- images were resized to 512 and then center-cropped to 512
- used clip=False
- discriminator parameters:
  - init_dim: 32
  - final_dim: 512
- generator parameters:
  - input noise_dim: 100
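The repo does not document a loading API, so the following is only a schematic of how the listed parameters (noise_dim 100, 512x512 outputs, and feature widths mirroring the discriminator's 32-512 range) typically fit together in a DCGAN-style generator; the actual checkpoint's architecture may differ.
```python
import torch
import torch.nn as nn

# Schematic DCGAN-style generator roughly matching the hyperparameters listed above.
# The real checkpoint's architecture is not documented here, so this is an assumption.
class Generator(nn.Module):
    def __init__(self, noise_dim=100, start_dim=512, out_size=512):
        super().__init__()
        layers = [nn.ConvTranspose2d(noise_dim, start_dim, 4, 1, 0),
                  nn.BatchNorm2d(start_dim), nn.ReLU(True)]          # 1x1 -> 4x4
        dim, size = start_dim, 4
        while size < out_size // 2:                                   # upsample to 256x256
            out_dim = max(dim // 2, 32)
            layers += [nn.ConvTranspose2d(dim, out_dim, 4, 2, 1),
                       nn.BatchNorm2d(out_dim), nn.ReLU(True)]
            dim, size = out_dim, size * 2
        layers += [nn.ConvTranspose2d(dim, 3, 4, 2, 1), nn.Tanh()]    # 256x256 -> 512x512 RGB
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

g = Generator()
z = torch.randn(4, 100, 1, 1)   # batch of 4 latent vectors with noise_dim=100
imgs = g(z)                     # -> (4, 3, 512, 512), values in [-1, 1]
print(imgs.shape)
```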
|
JKuang96/cartpole | JKuang96 | 2024-01-26T02:24:44Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2024-01-26T02:24:35Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
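For reference, the `mean_reward` metric above is typically computed with an evaluation loop like the hedged sketch below; `policy` is a placeholder for the trained Reinforce policy network, which this card does not describe how to load.
```python
import gymnasium as gym
import torch

# Hypothetical evaluation loop for the mean_reward metric reported above.
# `policy` stands in for the trained Reinforce policy network (not defined by this card).
def evaluate(policy, n_episodes=10):
    env = gym.make("CartPole-v1")
    returns = []
    for _ in range(n_episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            with torch.no_grad():
                action = policy(torch.tensor(obs, dtype=torch.float32)).argmax().item()
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    return sum(returns) / len(returns)
```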
|
caelum12/ddpm-celebahq-finetuned-butterflies-2epochs | caelum12 | 2024-01-26T02:14:59Z | 44 | 0 | diffusers | ["diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us"] | unconditional-image-generation | 2024-01-26T02:14:42Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('caelum12/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
seungminh/zero123-pro_v1.0 | seungminh | 2024-01-26T02:13:44Z | 0 | 3 | null | ["3d", "text-to-3d", "dataset:allenai/objaverse-xl", "license:mit", "region:us"] | text-to-3d | 2024-01-26T00:23:17Z |
---
datasets:
- allenai/objaverse-xl
tags:
- 3d
extra_gated_fields:
Name: text
Email: text
Country: text
Organization or Affiliation: text
I ALLOW Stability AI to email me about new model releases: checkbox
license: mit
license_name: sai-nc-community
pipeline_tag: text-to-3d
---
# Zero123-pro_v1
## Model Description
Zero123-pro is a fine-tuned model for *high-resolution* view-conditioned image generation based on [Zero123](https://github.com/cvlab-columbia/zero123).
Currently, our model targets 512x512 resolution, and we are still trying to find the best way to train at high resolution because convergence is not easy.
This model is currently fine-tuned only on a *chair* dataset, but a foundation model suitable for e-commerce will be released later.
## Usage
Use a config file modified from the original zero123 code base.
Our model has an output resolution of 512, and the corresponding latent dimension is 64. Therefore, the first_stage_config resolution should be set to 512 and image_size to 64, as in the sketch below.
To get good quality, please use an image with a 1:1 aspect ratio as input.
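A sketch of the resolution changes described above, assuming the standard LDM-style config layout used by the zero123 code base; the file paths and exact key names are assumptions, so check them against your local config.
```python
# Hedged sketch: set 512x512 outputs / 64x64 latents as described above.
# The config path and key names are assumptions based on the usual zero123/LDM layout.
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/sd-objaverse-finetune-c_concat-256.yaml")  # assumed path
cfg.model.params.image_size = 64                                         # latent resolution
cfg.model.params.first_stage_config.params.ddconfig.resolution = 512     # image resolution
OmegaConf.save(cfg, "configs/zero123-pro-512.yaml")
```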
## Model Details
* **Developed by**: Seungmin Ha, Yeonju Kim
* **Model type**: latent diffusion model.
* **Finetuned from model**: [lambdalabs/sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers)
* **License**: We released the 1st version of Zero123-pro.
* Some of the data used in **Zero123-pro** cannot be used for commercial purposes, but it can be used for research purposes.
### Training Infrastructure
* **Hardware**: `Zero123-pro` was trained on the cluster on a single node with 8 A100 80GiBs GPUs.
* **Code Base**: We use our modified version of [the original zero123 repository](https://github.com/cvlab-columbia/zero123).
### Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
|
EmbeddedLLM/Medusa2-Mistral-7B-Instruct-v0.2 | EmbeddedLLM | 2024-01-26T02:11:36Z | 9 | 0 | transformers | ["transformers", "pytorch", "mistral", "text-generation", "medusa", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-24T05:20:51Z |
---
license: apache-2.0
tags:
- medusa
---
# Model Description
This is a Medusa model for Mistral 7B Instruct v0.2.
This is trained using the latest Medusa 2 commit.
## Training:
* The dataset used is a self-distillation dataset generated from Mistral 7B Instruct v0.2 at temperature 0.3 with a maximum output length of 2048 tokens.
* It was trained using the axolotl fork, as described in the Medusa 2 README.md.
## Inference:
* To load the model, please follow the instructions found on [GitHub](https://github.com/FasterDecoding/Medusa?tab=readme-ov-file)
|
linhcuem/checker_TB_yolov8_ver2 | linhcuem | 2024-01-26T02:03:46Z | 0 | 0 | ultralytics | ["ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "model-index", "region:us"] | object-detection | 2024-01-26T02:03:40Z |
---
tags:
- ultralyticsplus
- yolov8
- ultralytics
- yolo
- vision
- object-detection
- pytorch
library_name: ultralytics
library_version: 8.0.43
inference: false
model-index:
- name: linhcuem/checker_TB_yolov8_ver2
results:
- task:
type: object-detection
metrics:
- type: precision # since [email protected] is not available on hf.co/metrics
value: 0.96786 # min: 0.0 - max: 1.0
name: [email protected](box)
---
<div align="center">
<img width="640" alt="linhcuem/checker_TB_yolov8_ver2" src="https://huggingface.co/linhcuem/checker_TB_yolov8_ver2/resolve/main/thumbnail.jpg">
</div>
### Supported Labels
```
['bom_gen', 'bom_jn', 'bom_knp', 'bom_sachet', 'bom_vtgk', 'bom_ytv', 'hop_dln', 'hop_jn', 'hop_vtg', 'hop_ytv', 'lo_kids', 'lo_ytv', 'loc_dln', 'loc_jn', 'loc_kids', 'loc_ytv', 'pocky', 'tui_gen', 'tui_jn', 'tui_sachet', 'tui_vtgk']
```
### How to use
- Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus):
```bash
pip install ultralyticsplus==0.0.28 ultralytics==8.0.43
```
- Load model and perform prediction:
```python
from ultralyticsplus import YOLO, render_result
# load model
model = YOLO('linhcuem/checker_TB_yolov8_ver2')
# set model parameters
model.overrides['conf'] = 0.25 # NMS confidence threshold
model.overrides['iou'] = 0.45 # NMS IoU threshold
model.overrides['agnostic_nms'] = False # NMS class-agnostic
model.overrides['max_det'] = 1000 # maximum number of detections per image
# set image
image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
# perform inference
results = model.predict(image)
# observe results
print(results[0].boxes)
render = render_result(model=model, image=image, result=results[0])
render.show()
```
|
kanishka/smolm-autoreg-bpe-counterfactual-babylm-keys_to_pipps_2913-1e-4 | kanishka | 2024-01-26T01:45:29Z | 4 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "opt", "text-generation", "generated_from_trainer", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-25T03:08:57Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-keys_to_pipps_2913-1e-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-keys_to_pipps_2913-1e-4
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3547
- Accuracy: 0.4065
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 4.0518 | 1.0 | 18629 | 4.2148 | 0.3103 |
| 3.5731 | 2.0 | 37258 | 3.7002 | 0.3628 |
| 3.3939 | 3.0 | 55887 | 3.5428 | 0.3792 |
| 3.2906 | 4.0 | 74516 | 3.4673 | 0.3886 |
| 3.2245 | 5.0 | 93145 | 3.4056 | 0.3939 |
| 3.1727 | 6.0 | 111774 | 3.4006 | 0.3969 |
| 3.1301 | 7.0 | 130403 | 3.3740 | 0.3995 |
| 3.0954 | 8.0 | 149032 | 3.3669 | 0.4005 |
| 3.0675 | 9.0 | 167661 | 3.3578 | 0.4022 |
| 3.0422 | 10.0 | 186290 | 3.3458 | 0.4028 |
| 3.011 | 11.0 | 204919 | 3.3474 | 0.4041 |
| 2.9957 | 12.0 | 223548 | 3.3452 | 0.4046 |
| 2.9729 | 13.0 | 242177 | 3.3344 | 0.4055 |
| 2.9501 | 14.0 | 260806 | 3.3348 | 0.4064 |
| 2.9303 | 15.0 | 279435 | 3.3385 | 0.4056 |
| 2.91 | 16.0 | 298064 | 3.3449 | 0.4063 |
| 2.8962 | 17.0 | 316693 | 3.3472 | 0.4060 |
| 2.8792 | 18.0 | 335322 | 3.3533 | 0.4061 |
| 2.867 | 19.0 | 353951 | 3.3526 | 0.4063 |
| 2.8451 | 20.0 | 372580 | 3.3547 | 0.4065 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.14.1
|
RecCode/whisper-fine_tuning
|
RecCode
| 2024-01-26T01:38:11Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-26T01:21:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
base_model: openai/whisper-small
model-index:
- name: whisper-fine_tuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-fine_tuning
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4096
- Wer: 89.7704
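A minimal transcription sketch with the 🤗 `transformers` ASR pipeline; the audio path below is a placeholder, and language/task options are left at their defaults:
```python
from transformers import pipeline

# Sketch only: load the fine-tuned checkpoint for speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="RecCode/whisper-fine_tuning",
)
# `sample.wav` is a placeholder path to a 16 kHz mono audio file.
print(asr("sample.wav")["text"])
```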
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 5.127 | 0.04 | 10 | 6.5305 | 89.7704 |
| 4.9407 | 0.08 | 20 | 5.6702 | 88.1002 |
| 3.9127 | 0.12 | 30 | 5.2648 | 85.1775 |
| 3.4678 | 0.16 | 40 | 5.0057 | 84.7599 |
| 3.7416 | 0.2 | 50 | 4.8397 | 85.3862 |
| 3.1575 | 0.24 | 60 | 4.6961 | 86.4301 |
| 3.3175 | 0.28 | 70 | 4.5819 | 87.2651 |
| 2.9554 | 0.32 | 80 | 4.4950 | 88.1002 |
| 3.0291 | 0.36 | 90 | 4.4375 | 89.7704 |
| 3.0219 | 0.4 | 100 | 4.4096 | 89.7704 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
GAI-LLM/Yi-Ko-6B-mixed-v15-dedup
|
GAI-LLM
| 2024-01-26T01:36:32Z | 0 | 0 |
transformers
|
[
"transformers",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T01:34:59Z |
---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/Yi-Ko-6B-mixed-v15-dedup**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/Yi-Ko-6B-mixed-v15-dedup is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B)
**Training Dataset**
- We combined open Korean datasets using a mixed strategy with deduplication.
- We used 8 × A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- See the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard) for results.
# Implementation Code
```python
### GAI-LLM/Yi-Ko-6B-mixed-v15-dedup
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/Yi-Ko-6B-mixed-v15-dedup"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
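A short, illustrative generation call that continues from the loading snippet above (the prompt and decoding settings are assumptions, not recommendations from the authors):
```python
# Sketch only: run after the loading snippet above, which defines `model` and `tokenizer`.
prompt = "대한민국의 수도는 어디인가요?"  # "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```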
|
ntc-ai/SDXL-LoRA-slider.wearing-a-suit-and-tie
|
ntc-ai
| 2024-01-26T01:27:57Z | 124 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-26T01:27:49Z |
---
language:
- en
thumbnail: "images/evaluate/wearing a suit and tie.../wearing a suit and tie_17_3.0.png"
widget:
- text: wearing a suit and tie
output:
url: images/wearing a suit and tie_17_3.0.png
- text: wearing a suit and tie
output:
url: images/wearing a suit and tie_19_3.0.png
- text: wearing a suit and tie
output:
url: images/wearing a suit and tie_20_3.0.png
- text: wearing a suit and tie
output:
url: images/wearing a suit and tie_21_3.0.png
- text: wearing a suit and tie
output:
url: images/wearing a suit and tie_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "wearing a suit and tie"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - wearing a suit and tie (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/wearing a suit and tie_17_-3.0.png" width=256 height=256 /> | <img src="images/wearing a suit and tie_17_0.0.png" width=256 height=256 /> | <img src="images/wearing a suit and tie_17_3.0.png" width=256 height=256 /> |
| <img src="images/wearing a suit and tie_19_-3.0.png" width=256 height=256 /> | <img src="images/wearing a suit and tie_19_0.0.png" width=256 height=256 /> | <img src="images/wearing a suit and tie_19_3.0.png" width=256 height=256 /> |
| <img src="images/wearing a suit and tie_20_-3.0.png" width=256 height=256 /> | <img src="images/wearing a suit and tie_20_0.0.png" width=256 height=256 /> | <img src="images/wearing a suit and tie_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
wearing a suit and tie
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.wearing-a-suit-and-tie', weight_name='wearing a suit and tie.safetensors', adapter_name="wearing a suit and tie")
# Activate the LoRA
pipe.set_adapters(["wearing a suit and tie"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, wearing a suit and tie"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
motherduckdb/DuckDB-NSQL-7B-v0.1
|
motherduckdb
| 2024-01-26T00:45:52Z | 381 | 90 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"base_model:meta-llama/Llama-2-7b",
"base_model:finetune:meta-llama/Llama-2-7b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T19:13:56Z |
---
license: llama2
inference:
parameters:
do_sample: false
max_length: 200
base_model: meta-llama/Llama-2-7b
widget:
- text: "### Instruction:\nYour task is to generate valid duckdb SQL to answer the following question.\n\n### Input:\n\n### Question:\ncreate a new table called tmp from test.csv\n\n### Response (use duckdb shorthand if possible):"
example_title: "read test.csv"
- text: "### Instruction:\nYour task is to generate valid duckdb SQL to answer the following question.\n\n### Input:\n\n### Question:\ncreate a new table called tmp from test.csv\n\n### Response (use duckdb shorthand if possible):"
example_title: "get _amount columns"
- text: "### Instruction:\nYour task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.\n\n### Input:\nHere is the database schema that the SQL query will run on:\nCREATE TABLE rideshare (\n hvfhs_license_num varchar,\n dispatching_base_num varchar,\n originating_base_num varchar,\n request_datetime timestamp,\n on_scene_datetime timestamp,\n pickup_datetime timestamp,\n dropoff_datetime timestamp,\n trip_miles double,\n trip_time bigint,\n\n);\n\n### Question:\nget longest trip in december 2022\n\n### Response (use duckdb shorthand if possible):"
example_title: "taxi trips"
---
# DuckDB-NSQL-7B
## Model Description
NSQL is a family of autoregressive open-source large foundation models (FMs) designed specifically for SQL generation tasks.
In this repository we are introducing a new member of NSQL, DuckDB-NSQL. It's based on Meta's original [Llama-2 7B model](https://huggingface.co/meta-llama/Llama-2-7b) and further pre-trained on a dataset of general SQL queries and then fine-tuned on a dataset composed of DuckDB text-to-SQL pairs.
## Training Data
200k DuckDB text-to-SQL pairs, synthetically generated using [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) and guided by the DuckDB v0.9.2 documentation, plus text-to-SQL pairs from [NSText2SQL](https://huggingface.co/datasets/NumbersStation/NSText2SQL) that were transpiled to DuckDB SQL using [sqlglot](https://github.com/tobymao/sqlglot).
## Evaluation Data
We evaluate our models on a DuckDB-specific benchmark that contains 75 text-to-SQL pairs. The benchmark is available [here](https://github.com/NumbersStationAI/DuckDB-NSQL/).
## Training Procedure
DuckDB-NSQL was trained using cross-entropy loss to maximize the likelihood of sequential inputs. For finetuning on text-to-SQL pairs, we only compute the loss over the SQL portion of the pair. The model is trained using 80GB A100s, leveraging data and model parallelism. We fine-tuned for 10 epochs.
## Intended Use and Limitations
The model was designed for text-to-SQL generation from a given table schema and natural language prompt. The model works best with the prompt format defined below.
In contrast to existing text-to-SQL models, the SQL generation is not constrained to `SELECT` statements; the model can generate any valid DuckDB SQL statement, including statements for official DuckDB extensions.
## How to Use
Example 1:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1", torch_dtype=torch.bfloat16)
text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question.
### Input:
### Question:
create a new table called tmp from test.csv
### Response (use duckdb shorthand if possible):
"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
Example 2:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1", torch_dtype=torch.bfloat16)
text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.
### Input:
Here is the database schema that the SQL query will run on:
CREATE TABLE taxi (
VendorID bigint,
tpep_pickup_datetime timestamp,
tpep_dropoff_datetime timestamp,
passenger_count double,
trip_distance double,
fare_amount double,
extra double,
tip_amount double,
tolls_amount double,
improvement_surcharge double,
total_amount double,
);
### Question:
get all columns ending with _amount from taxi table
### Response (use duckdb shorthand if possible):"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
Example 3:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1", torch_dtype=torch.bfloat16)
text = """### Instruction:
Your task is to generate valid duckdb SQL to answer the following question, given a duckdb database schema.
### Input:
Here is the database schema that the SQL query will run on:
CREATE TABLE rideshare (
hvfhs_license_num varchar,
dispatching_base_num varchar,
originating_base_num varchar,
request_datetime timestamp,
on_scene_datetime timestamp,
pickup_datetime timestamp,
dropoff_datetime timestamp,
trip_miles double,
trip_time bigint,
);
### Question:
get longest trip in december 2022
### Response (use duckdb shorthand if possible):
"""
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
For more information (e.g., run with your local database), please find examples in [this repository](https://github.com/NumbersStationAI/DuckDB-NSQL).
|
mhammadkhan/peft-lora-starcoderbase-1b-llamaindex-copilot
|
mhammadkhan
| 2024-01-26T00:44:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-26T00:39:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alexandreacff/finetuning-sentiment-model-1000-samples
|
alexandreacff
| 2024-01-26T00:37:40Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:neuralmind/bert-base-portuguese-cased",
"base_model:finetune:neuralmind/bert-base-portuguese-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-26T00:31:00Z |
---
license: mit
base_model: neuralmind/bert-base-portuguese-cased
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-1000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-1000-samples
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6955
- eval_accuracy: 0.7067
- eval_f1: 0.6384
- eval_runtime: 3.7307
- eval_samples_per_second: 80.415
- eval_steps_per_second: 5.093
- step: 0
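A minimal sketch for running the classifier on Portuguese text; the label names come from the checkpoint config and are not documented here:
```python
from transformers import pipeline

# Sketch only: the checkpoint is a Portuguese BERT fine-tuned for sentiment classification.
classifier = pipeline(
    "text-classification",
    model="alexandreacff/finetuning-sentiment-model-1000-samples",
)
print(classifier("O produto chegou rápido e funciona muito bem."))
```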
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
jeiku/General_Purpose_3B_GGUF
|
jeiku
| 2024-01-26T00:25:36Z | 16 | 2 | null |
[
"gguf",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:jeiku/Everything_v3_128_StableLM",
"base_model:merge:jeiku/Everything_v3_128_StableLM",
"base_model:jeiku/Theory_of_Mind_128_StableLM",
"base_model:merge:jeiku/Theory_of_Mind_128_StableLM",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-25T23:59:59Z |
---
base_model:
- jeiku/Theory_of_Mind_128_StableLM
- jeiku/Everything_v3_128_StableLM
- jeiku/Gnosis_StableLM
tags:
- mergekit
- merge
---
# mooby
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* new1 + [jeiku/Theory_of_Mind_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_128_StableLM)
* new1 + [jeiku/Everything_v3_128_StableLM](https://huggingface.co/jeiku/Everything_v3_128_StableLM)
* new1 + [jeiku/Gnosis_StableLM](https://huggingface.co/jeiku/Gnosis_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
models:
- model: new1+jeiku/Theory_of_Mind_128_StableLM
parameters:
weight: 1
- model: new1+jeiku/Everything_v3_128_StableLM
parameters:
weight: 1
- model: new1+jeiku/Gnosis_StableLM
parameters:
weight: 1
dtype: float16
```
|
fredhsu/fastai-dogcat
|
fredhsu
| 2024-01-25T23:53:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-25T23:52:23Z |
This is an example exported model created while following the fast.ai course. It was trained with fast.ai's vision learner on the Oxford-IIIT Pet image dataset.
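A minimal loading sketch, assuming the learner was exported to the conventional `export.pkl` filename; if the DataBlock used a custom labelling function (such as the course's `is_cat`), define it before loading:
```python
from fastai.vision.all import load_learner, PILImage

# Sketch only: 'export.pkl' and the image path are placeholder names, not confirmed by this repo.
learn = load_learner("export.pkl")
img = PILImage.create("cat_or_dog.jpg")
pred_class, pred_idx, probs = learn.predict(img)
print(pred_class, probs[pred_idx])
```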
|
ambrosfitz/tinyllama-history-chat-v1.5
|
ambrosfitz
| 2024-01-25T23:47:26Z | 84 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T19:18:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
We took the TinyLlama format and fine-tuned the model with a history focus.
### Model Description
This model was fine-tuned using a dataset based on the open-source textbooks The American Yawp and OpenStax US History. Question-and-answer pairs from the dataset were
generated using Claude.ai and ChatGPT 3.5.
### Updated:
v1.5 is an update of v1.1, trained on the new dataset generated by ambrosfitz/mighty-history-merge.
https://huggingface.co/datasets/ambrosfitz/mighty-history-merge
### Python Script
I have included the Python script used to generate most of this data in the mighty-history-merge dataset. Take a look if you need
a way to generate text from ChatGPT or to generate Q&A pairs from books or other material.
- **Developed by:** ambrosfitz
- **Model type:** llama
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model:** Tinyllama
## Uses
The purpose of this model is to provide a more specialized model for questions about history. Future versions will focus on open-source history journals,
primarily from an American history perspective.
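A minimal chat sketch with 🤗 `transformers`, assuming the tokenizer ships a chat template (otherwise format the prompt manually); the question and decoding settings are illustrative:
```python
from transformers import pipeline

# Sketch only: load the chat model and ask a history question.
chat = pipeline("text-generation", model="ambrosfitz/tinyllama-history-chat-v1.5")
messages = [{"role": "user", "content": "What caused the War of 1812?"}]
prompt = chat.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(chat(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)[0]["generated_text"])
```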
|
mii-llm/maestrale-chat-v0.1-alpha-sft
|
mii-llm
| 2024-01-25T23:38:55Z | 16 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"sft",
"it",
"chatml",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-07T11:06:55Z |
---
tags:
- sft
- it
- mistral
- chatml
model-index:
- name: maestrale-chat-v0.1-alpha
results: []
license: cc-by-nc-4.0
language:
- it
prompt_template: >-
<|im_start|>system {system_message}<|im_end|> <|im_start|>user
{prompt}<|im_end|> <|im_start|>assistant
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://imgur.com/55bA8IP.jpg" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Maestrale chat alpha ༄
By @efederici and @mferraretto
## Model description
- **Language Model**: Mistral-7b adapted to Italian, with continued pre-training on a curated, large-scale, high-quality Italian corpus.
- **Fine-Tuning**: SFT performed on ~250k Italian conversations/instructions for one epoch.
This model uses ChatML prompt format:
```
<|im_start|>system
Assisti sempre con cura, rispetto e verità. Rispondi con la massima utilità ma in modo sicuro. Evita contenuti dannosi, non etici, pregiudizievoli o negativi. Assicurati che le risposte promuovano equità e positività.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Usage:
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GenerationConfig,
TextStreamer
)
import torch
torch.backends.cuda.matmul.allow_tf32 = True
tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.1-alpha")
model = AutoModelForCausalLM.from_pretrained("mii-llm/maestrale-chat-v0.1-alpha", load_in_8bit=True, device_map="auto")
gen = GenerationConfig(
do_sample=True,
temperature=0.7,
repetition_penalty=1.2,
top_k=50,
top_p=0.95,
max_new_tokens=500,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>")
)
messages = [
{"role": "system", "content": "Assisti sempre con cura, rispetto e verità. Rispondi con la massima utilità ma in modo sicuro. Evita contenuti dannosi, non etici, pregiudizievoli o negativi. Assicurati che le risposte promuovano equità e positività."},
{"role": "user", "content": "{prompt}"}
]
with torch.no_grad(), torch.backends.cuda.sdp_kernel(
enable_flash=True,
enable_math=False,
enable_mem_efficient=False
):
temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(temp, return_tensors="pt").to("cuda")
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
**inputs,
streamer=streamer,
generation_config=gen
)
```
## Intended uses & limitations
This is an alpha version and it is not `aligned`; it is a first test. We are working on alignment data and evals.
|
microdev1/autotrain1
|
microdev1
| 2024-01-25T23:29:46Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T05:23:39Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
RinneAi/RinneVoiceSet
|
RinneAi
| 2024-01-25T23:27:27Z | 0 | 28 | null |
[
"audio",
"audio-to-audio",
"speech",
"text-to-speech",
"ja",
"license:other",
"region:us"
] |
text-to-speech
| 2023-04-18T01:32:35Z |
---
language:
- ja
tags:
- audio
- audio-to-audio
- speech
- text-to-speech
license: other
---
# Rinne Voice Set

Please translate this page in your web browser.
You can very easily have Rinne's voice sing, speak, or be used for voice changing.
<audio controls src="https://huggingface.co/RinneAi/RinneVoiceSet/resolve/main/README.mp3"></audio>
## Rinne [MYCOEIROINK](https://coeiroink.com/mycoeiroink) Samples
**▶[Using "Make it sing", we had Rinne sing YOASOBI's "Idol".](https://www.youtube.com/shorts/Z9FFOyOX5vs)**
<audio controls src="https://huggingface.co/RinneAi/RinneVoiceSet/resolve/main/RinneMycoeiroink_0.mp3"></audio>
<audio controls src="https://huggingface.co/RinneAi/RinneVoiceSet/resolve/main/RinneMycoeiroink_1.mp3"></audio>
<audio controls src="https://huggingface.co/RinneAi/RinneVoiceSet/resolve/main/RinneMycoeiroink_2.mp3"></audio>
# How to Use
1. Download [`RinneVoiceSet.zip`](https://huggingface.co/RinneAi/RinneVoiceSet/resolve/main/RinneVoiceSet.zip) and extract it to a shallow, alphanumeric-only path such as `C:\RinneVoiceSet`.
2. Double-click `Setup.bat` to run it.
- If "Windows protected your PC" appears, choose "More info" and then "Run anyway".
3. Continue with the steps for what you want to do.
- [Make Rinne sing](./Sing.md)
- [Make Rinne speak](./Speech.md) (Japanese only)
- [Voice-change into Rinne](./VoiceChange.md)
It runs on Windows 10 or later with a recent NVIDIA graphics card.
Singing, speaking, and voice changing each use about 10 GB of disk space.
# Terms of Use and Disclaimer
The Rinne Voice Set uses the following. Please check the terms of use and disclaimers of each.
- Singing: [Retrieval-based-Voice-Conversion-WebUI](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
- Speaking: [COEIROINK](https://coeiroink.com/)
- Voice changing: [VC Client](https://github.com/w-okada/voice-changer)
- [7-Zip](https://sevenzip.osdn.jp/)
## Rinne voice models
- We used to ask for a "Rinne" credit on derivative works, but it is now optional. (January 2024)
- It is not required, but we would appreciate the credit above or a [link to Rinne's X (Twitter)](https://twitter.com/llrinnell).
### Allowed
- Editing derivative works
- Commercial use of derivative works
- Using derivative works as training material
  - Only for RVC singing and voice changing; not allowed for COEIROINK speaking
- Merging the voice models
- R-18 and R-18G content (zoning is required)
### Not allowed
- Redistributing the voice models
- Criticizing or attacking people
- Calling for support of, or opposition to, specific political positions, religions, or ideologies
- Publishing explicit content without zoning
- Actions that harm the provider, such as impersonating Rinne
## Rinne MYCOEIROINK (speaking)
In addition to the conditions for the Rinne voice models above, please also check the [COEIROINK terms of use](https://coeiroink.com/terms).
**Derivative works must include the credit "COEIROINK:Rinne".**
# Downloading the voice models on their own
For those who only want the voice models.
- RVC voice models for [Retrieval-based-Voice-Conversion-WebUI](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)
  - ITA corpus & MANA corpus
  - [2023/04/16 version, 200 epochs](https://huggingface.co/RinneAi/RinneVoiceSet/resolve/main/RinneRVC-20230416.zip)
  - [2023/04/10 version, 100 epochs](https://huggingface.co/RinneAi/RinneVoiceSet/resolve/main/RinneRVC-20230410.zip)
- [MYCOEIROINK](https://coeiroink.com/mycoeiroink) voice model
  - ITA corpus & MANA corpus, 100 epochs
  - [2023/01/24 version](https://huggingface.co/RinneAi/RinneVoiceSet/resolve/main/RinneMycoeiroink-20230124.zip)
----
COEIROINK:Rinne
|
jeiku/General_Purpose_3B
|
jeiku
| 2024-01-25T23:11:58Z | 118 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"arxiv:2203.05482",
"base_model:jeiku/Everything_v3_128_StableLM",
"base_model:merge:jeiku/Everything_v3_128_StableLM",
"base_model:jeiku/Theory_of_Mind_128_StableLM",
"base_model:merge:jeiku/Theory_of_Mind_128_StableLM",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-25T22:37:16Z |
---
base_model:
- jeiku/Theory_of_Mind_128_StableLM
- jeiku/Everything_v3_128_StableLM
- jeiku/Gnosis_StableLM
tags:
- mergekit
- merge
---
# mooby
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* new1 + [jeiku/Theory_of_Mind_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_128_StableLM)
* new1 + [jeiku/Everything_v3_128_StableLM](https://huggingface.co/jeiku/Everything_v3_128_StableLM)
* new1 + [jeiku/Gnosis_StableLM](https://huggingface.co/jeiku/Gnosis_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
models:
- model: new1+jeiku/Theory_of_Mind_128_StableLM
parameters:
weight: 1
- model: new1+jeiku/Everything_v3_128_StableLM
parameters:
weight: 1
- model: new1+jeiku/Gnosis_StableLM
parameters:
weight: 1
dtype: float16
```
|
unionai/RedPajama-INCITE-Base-3B-v1-wikipedia-8bit
|
unionai
| 2024-01-25T23:06:49Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"causal-lm",
"redpajama",
"fine-tuning",
"wikipedia",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-12T15:44:55Z |
---
datasets:
- wikipedia
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- redpajama
- fine-tuning
- wikipedia
---
# RedPajama-INCITE-Base-3B-v1 fine-tuned on wikipedia
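A minimal loading sketch; the 8-bit loading flag is an assumption based on the repository name and requires `bitsandbytes`, `accelerate`, and a CUDA GPU:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: load the fine-tuned GPT-NeoX checkpoint and generate a short continuation.
repo = "unionai/RedPajama-INCITE-Base-3B-v1-wikipedia-8bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", load_in_8bit=True)

inputs = tokenizer("Wikipedia is a free online encyclopedia that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```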
|
stevhliu/my_awesome_eli5_clm-model
|
stevhliu
| 2024-01-25T23:03:03Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-12T21:10:52Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8283
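A minimal sketch that scores a prompt with the fine-tuned distilgpt2 checkpoint; the example sentence is arbitrary, so its loss will not match the evaluation loss reported above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: compute the language-modeling loss and perplexity of one sentence.
repo = "stevhliu/my_awesome_eli5_clm-model"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Somatic hypermutation allows the immune system to", return_tensors="pt")
with torch.no_grad():
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"loss: {loss.item():.3f}  perplexity: {torch.exp(loss).item():.1f}")
```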
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9484 | 1.0 | 1321 | 3.8403 |
| 3.8458 | 2.0 | 2642 | 3.8302 |
| 3.8048 | 3.0 | 3963 | 3.8283 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
varun-v-rao/t5-large-snli-model3
|
varun-v-rao
| 2024-01-25T22:45:57Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T18:24:34Z |
---
license: apache-2.0
base_model: t5-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-large-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-snli-model3
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2242
- Accuracy: 0.9267
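A minimal inference sketch, assuming the checkpoint loads with a sequence-classification head and that premise and hypothesis are passed as a sentence pair; check `model.config.id2label` for the actual label names:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Sketch only: classify the relationship between a premise and a hypothesis.
repo = "varun-v-rao/t5-large-snli-model3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, pred))
```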
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 71
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2808 | 1.0 | 4292 | 0.2214 | 0.9246 |
| 0.2491 | 2.0 | 8584 | 0.2190 | 0.9259 |
| 0.2213 | 3.0 | 12876 | 0.2242 | 0.9267 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
AlekseyKorshuk/evol-codealpaca-v1-sft-4e-5
|
AlekseyKorshuk
| 2024-01-25T22:45:48Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"phi",
"text-generation",
"axolotl",
"generated_from_trainer",
"conversational",
"custom_code",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T21:15:10Z |
---
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: evol-codealpaca-v1-sft-4e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: microsoft/phi-2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
hub_model_id: AlekseyKorshuk/evol-codealpaca-v1-sft-4e-5
hub_strategy: every_save
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: AlekseyKorshuk/evol-codealpaca-v1-sft
type: sharegpt
conversation: chatml
dataset_prepared_path:
val_set_size: 0
output_dir: ./output
sequence_len: 2048
sample_packing: false # currently unsupported
pad_to_sequence_len:
lora_r:
lora_alpha:
lora_dropout:
lora_target_modules:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: ui-thesis
wandb_entity:
wandb_watch:
wandb_name: phi-2-chatml-v1
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 16
num_epochs: 1
optimizer: paged_adamw_8bit
adam_beta1: 0.9
adam_beta2: 0.95
max_grad_norm: 1.0
adam_epsilon: 0.00001
lr_scheduler: cosine
cosine_min_lr_ratio: 0.1
learning_rate: 4e-5
warmup_ratio: 0.1
weight_decay: 0.1
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
#float16: false
#bloat16: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
evals_per_epoch: 0
eval_table_size: 8 # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0
eval_table_max_new_tokens: 768 # Total number of tokens generated for predictions sent to wandb. Default is 128
eval_sample_packing: false
chat_template: chatml
saves_per_epoch: 1
save_total_limit: 1
seed: 42
debug:
deepspeed:
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
special_tokens:
eos_token: "<|im_end|>"
pad_token: "<|endoftext|>"
tokens:
- "<|im_start|>"
```
</details><br>
# evol-codealpaca-v1-sft-4e-5
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the None dataset.
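A minimal inference sketch, assuming the ChatML framing and special tokens from the axolotl config above; the prompt and decoding settings are illustrative only:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: wrap the request in ChatML and stop at <|im_end|>, as set up in the config above.
repo = "AlekseyKorshuk/evol-codealpaca-v1-sft-4e-5"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, trust_remote_code=True)

prompt = (
    "<|im_start|>user\n"
    "Write a Python function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```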
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 3
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Kooten/EstopianMaid-13B-6bpw-exl2
|
Kooten
| 2024-01-25T22:30:22Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"roleplay",
"text-generation-inference",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T21:06:37Z |
---
license: apache-2.0
language:
- en
tags:
- roleplay
- text-generation-inference
---
# EstopianMaid-13B 6bpw
## Description
Exllama quant of [KatyTheCutie/EstopianMaid-13B](https://huggingface.co/KatyTheCutie/EstopianMaid-13B)
## Other quants:
EXL2: [8bpw](https://huggingface.co/Kooten/EstopianMaid-13B-8bpw-exl2), [6bpw](https://huggingface.co/Kooten/EstopianMaid-13B-6bpw-exl2), [5bpw](https://huggingface.co/Kooten/EstopianMaid-13B-5bpw-exl2), [4bpw](https://huggingface.co/Kooten/EstopianMaid-13B-4bpw-exl2)
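To fetch the 6bpw files in this repo for an ExLlamaV2-compatible loader, a minimal download sketch with `huggingface_hub` (the local directory name is arbitrary):
```python
from huggingface_hub import snapshot_download

# Sketch only: download this quant locally, then point your ExLlamaV2 loader at the directory.
local_dir = snapshot_download(
    repo_id="Kooten/EstopianMaid-13B-6bpw-exl2",
    local_dir="EstopianMaid-13B-6bpw-exl2",
)
print(local_dir)
```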
## Prompt format:
Unclear; none was suggested, and the merged models use different formats.
## Contact
Kooten on discord
|
BoloniniD/internlm2-20b-gguf
|
BoloniniD
| 2024-01-25T22:21:20Z | 5 | 2 | null |
[
"gguf",
"internlm2",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-01-25T19:10:53Z |
---
tags:
- internlm2
pipeline_tag: text-generation
---
GGUF quants of https://huggingface.co/chargoddard/internlm2-20b-llama using https://huggingface.co/RangiLyu/InternLM2-tokenizer-llama to fix tokenizer issues.
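A minimal sketch with `llama-cpp-python`; the exact `.gguf` filename below is an assumption, so pick whichever quant file in this repo you downloaded:
```python
from llama_cpp import Llama

# Sketch only: load a local GGUF file and run a plain completion.
llm = Llama(model_path="internlm2-20b.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain the difference between a list and a tuple in Python.", max_tokens=200)
print(out["choices"][0]["text"])
```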
|
jeiku/RPGodzilla_3.43B_GGUF
|
jeiku
| 2024-01-25T22:07:01Z | 60 | 2 | null |
[
"gguf",
"mergekit",
"merge",
"arxiv:2203.05482",
"base_model:jeiku/Bluemoon_cleaned_StableLM",
"base_model:merge:jeiku/Bluemoon_cleaned_StableLM",
"base_model:jeiku/Everything_v3_128_StableLM",
"base_model:merge:jeiku/Everything_v3_128_StableLM",
"base_model:jeiku/LimaRP_StableLM",
"base_model:merge:jeiku/LimaRP_StableLM",
"base_model:jeiku/No_Robots_Alpaca_StableLM",
"base_model:merge:jeiku/No_Robots_Alpaca_StableLM",
"base_model:jeiku/PIPPA_128_StableLM",
"base_model:merge:jeiku/PIPPA_128_StableLM",
"base_model:jeiku/RPGPT_StableLM",
"base_model:merge:jeiku/RPGPT_StableLM",
"base_model:jeiku/Theory_of_Mind_128_StableLM",
"base_model:merge:jeiku/Theory_of_Mind_128_StableLM",
"base_model:jeiku/Theory_of_Mind_RP_128_StableLM",
"base_model:merge:jeiku/Theory_of_Mind_RP_128_StableLM",
"base_model:jeiku/Toxic_DPO_StableLM",
"base_model:merge:jeiku/Toxic_DPO_StableLM",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-25T20:33:59Z |
---
base_model:
- jeiku/Bluemoon_cleaned_StableLM
- jeiku/Everything_v3_128_StableLM
- jeiku/Theory_of_Mind_128_StableLM
- jeiku/Theory_of_Mind_RP_128_StableLM
- jeiku/LimaRP_StableLM
- jeiku/No_Robots_Alpaca_StableLM
- jeiku/RPGPT_StableLM
- jeiku/Toxic_DPO_StableLM
- jeiku/PIPPA_128_StableLM
- jeiku/Gnosis_StableLM
tags:
- mergekit
- merge
---
# snek
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* snek + [jeiku/Bluemoon_cleaned_StableLM](https://huggingface.co/jeiku/Bluemoon_cleaned_StableLM)
* snek + [jeiku/Everything_v3_128_StableLM](https://huggingface.co/jeiku/Everything_v3_128_StableLM)
* snek + [jeiku/Theory_of_Mind_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_128_StableLM)
* snek + [jeiku/Theory_of_Mind_RP_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_RP_128_StableLM)
* snek + [jeiku/LimaRP_StableLM](https://huggingface.co/jeiku/LimaRP_StableLM)
* snek + [jeiku/No_Robots_Alpaca_StableLM](https://huggingface.co/jeiku/No_Robots_Alpaca_StableLM)
* snek + [jeiku/RPGPT_StableLM](https://huggingface.co/jeiku/RPGPT_StableLM)
* snek + [jeiku/Toxic_DPO_StableLM](https://huggingface.co/jeiku/Toxic_DPO_StableLM)
* snek + [jeiku/PIPPA_128_StableLM](https://huggingface.co/jeiku/PIPPA_128_StableLM)
* snek + [jeiku/Gnosis_StableLM](https://huggingface.co/jeiku/Gnosis_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
models:
- model: snek+jeiku/Theory_of_Mind_128_StableLM
parameters:
weight: 1
- model: snek+jeiku/Everything_v3_128_StableLM
parameters:
weight: 1
- model: snek+jeiku/Gnosis_StableLM
parameters:
weight: 1
- model: snek+jeiku/Toxic_DPO_StableLM
parameters:
weight: 1
- model: snek+jeiku/No_Robots_Alpaca_StableLM
parameters:
weight: 1
- model: snek+jeiku/Theory_of_Mind_RP_128_StableLM
parameters:
weight: 1
- model: snek+jeiku/Bluemoon_cleaned_StableLM
parameters:
weight: 1
- model: snek+jeiku/RPGPT_StableLM
parameters:
weight: 1
- model: snek+jeiku/LimaRP_StableLM
parameters:
weight: 1
- model: snek+jeiku/PIPPA_128_StableLM
parameters:
weight: 1
dtype: float16
```
|
Angelinis/outputs
|
Angelinis
| 2024-01-25T22:06:45Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:lrds-code/samba-1.1B",
"base_model:adapter:lrds-code/samba-1.1B",
"license:llama2",
"region:us"
] | null | 2024-01-25T22:06:42Z |
---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: lrds-code/samba-1.1B
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [lrds-code/samba-1.1B](https://huggingface.co/lrds-code/samba-1.1B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
maldred/Reinforce-copter
|
maldred
| 2024-01-25T21:47:46Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T21:21:11Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-copter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 21.30 +/- 23.69
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
simonycl/data-selection-Llama-2-7b-sharegpt-KMenasRandomDeita-64-0.05-lora-epoch_4
|
simonycl
| 2024-01-25T21:22:25Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-25T21:22:13Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
hpourmodheji/torch-ppo-LunarLander-v2
|
hpourmodheji
| 2024-01-25T21:14:50Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-21T04:18:35Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -152.39 +/- 57.62
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'hpourmodheji/torch-ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
Luizinftr/TESTE
|
Luizinftr
| 2024-01-25T21:04:37Z | 0 | 0 | null |
[
"code",
"image-to-3d",
"br",
"dataset:LDJnr/Capybara",
"license:apache-2.0",
"region:us"
] |
image-to-3d
| 2024-01-25T21:01:34Z |
---
license: apache-2.0
datasets:
- LDJnr/Capybara
language:
- br
metrics:
- bertscore
pipeline_tag: image-to-3d
tags:
- code
---
|
zaq-hack/Orion-14B-LongChat-bpw600-h6-exl2
|
zaq-hack
| 2024-01-25T21:01:29Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"orion",
"text-generation",
"code",
"model",
"llm",
"custom_code",
"en",
"zh",
"ja",
"ko",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-25T16:58:27Z |
---
language:
- en
- zh
- ja
- ko
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- code
- model
- llm
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<img src="./assets/imgs/orion_start.PNG" alt="logo" width="50%" />
</div>
<div align="center">
<h1>
Orion-14B
</h1>
</div>
<div align="center">
<div align="center">
<b>🌐English</b> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-LongChat/blob/main/README_zh.md" target="_blank">🇨🇳中文</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-LongChat/blob/main/README_ja.md" target="_blank">🇯🇵日本語</a> | <a href="https://huggingface.co/OrionStarAI/Orion-14B-LongChat/blob/main/README_ko.md" target="_blank">🇰🇷한국어</a>
</div>
<h4 align="center">
<p>
🤗 <a href="https://huggingface.co/OrionStarAI" target="_blank">HuggingFace Mainpage</a> | 🤖 <a href="https://modelscope.cn/organization/OrionStarAI" target="_blank">ModelScope Mainpage</a><br>🎬 <a href="https://huggingface.co/spaces/OrionStarAI/Orion-14B-App-Demo" target="_blank">HuggingFace Demo</a> | 🎫 <a href="https://modelscope.cn/studios/OrionStarAI/Orion-14B-App-Demo/summary" target="_blank">ModelScope Demo</a><br>😺 <a href="https://github.com/OrionStarAI/Orion" target="_blank">GitHub</a><br>📖 <a href="https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf" target="_blank">Tech Report</a>
<p>
</h4>
</div>
# Table of Contents
- [📖 Model Introduction](#model-introduction)
- [🔗 Model Download](#model-download)
- [🔖 Model Benchmark](#model-benchmark)
- [📊 Model Inference](#model-inference)[<img src="./assets/imgs/vllm.png" alt="vllm" height="20"/>](#vllm) [<img src="./assets/imgs/llama_cpp.png" alt="llamacpp" height="20"/>](#llama-cpp)
- [📜 Declarations & License](#declarations-license)
- [🥇 Company Introduction](#company-introduction)
<a name="model-introduction"></a><br>
# 1. Model Introduction
- Orion-14B series models are open-source multilingual large language models trained from scratch by OrionStarAI. The base model is trained on a 2.5T-token multilingual corpus, including Chinese, English, Japanese, and Korean, and it exhibits superior performance in these languages. For details, please refer to the [tech report](https://github.com/OrionStarAI/Orion/blob/master/doc/Orion14B_v3.pdf).
- The Orion-14B series models exhibit the following features:
- Among models at the 20B-parameter scale, the Orion-14B-Base model shows outstanding performance in comprehensive evaluations.
- Strong multilingual capabilities, significantly outperforming other models on Japanese and Korean test sets.
- The fine-tuned models demonstrate strong adaptability, excelling in human-annotated blind tests.
- The long-chat version supports extremely long texts, performing exceptionally well at a token length of 200k and supporting up to a maximum of 320k.
- The quantized versions reduce the model size by 70% and improve inference speed by 30%, with a performance loss of less than 1%.
<table style="border-collapse: collapse; width: 100%;">
<tr>
<td style="border: none; padding: 10px; box-sizing: border-box;">
<img src="./assets/imgs/opencompass_en.png" alt="opencompass" style="width: 100%; height: auto;">
</td>
<td style="border: none; padding: 10px; box-sizing: border-box;">
<img src="./assets/imgs/model_cap_en.png" alt="modelcap" style="width: 100%; height: auto;">
</td>
</tr>
</table>
- Orion-14B series models including:
- **Orion-14B-Base:** A multilingual large language foundational model with 14 billion parameters, pretrained on a diverse dataset of 2.5 trillion tokens.
- **Orion-14B-Chat:** A chat model fine-tuned on a high-quality corpus, aiming to provide an excellent interactive experience for users in the large model community.
- **Orion-14B-LongChat:** The long-context version excels at handling extremely lengthy texts, performing exceptionally well at a token length of 200k and supporting up to a maximum of 320k.
- **Orion-14B-Chat-RAG:** A chat-model fine-tuned on a custom retrieval augmented generation dataset, achieving superior performance in retrieval augmented generation tasks.
- **Orion-14B-Chat-Plugin:** A chat-model specifically tailored for plugin and function calling tasks, ideal for agent-related scenarios where the LLM acts as a plugin and function call system.
- **Orion-14B-Base-Int4:** A quantized base model utilizing 4-bit integer weights. It significantly reduces the model size by 70% and increases the inference speed by 30% while incurring a minimal performance loss of only 1%.
- **Orion-14B-Chat-Int4:** A quantized chat model utilizing 4-bit integer weights.
<a name="model-download"></a><br>
# 2. Model Download
Model release and download links are provided in the table below:
| Model Name | HuggingFace Download Links | ModelScope Download Links |
|-------------------------|-----------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------|
| ⚾Orion-14B-Base | [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base) | [Orion-14B-Base](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base/summary) |
| 😛Orion-14B-Chat | [Orion-14B-Chat](https://huggingface.co/OrionStarAI/Orion-14B-Chat) | [Orion-14B-Chat](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat/summary) |
| 📃Orion-14B-LongChat | [Orion-14B-LongChat](https://huggingface.co/OrionStarAI/Orion-14B-LongChat) | [Orion-14B-LongChat](https://modelscope.cn/models/OrionStarAI/Orion-14B-LongChat/summary) |
| 🔎Orion-14B-Chat-RAG | [Orion-14B-Chat-RAG](https://huggingface.co/OrionStarAI/Orion-14B-Chat-RAG) | [Orion-14B-Chat-RAG](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-RAG/summary) |
| 🔌Orion-14B-Chat-Plugin | [Orion-14B-Chat-Plugin](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Plugin) | [Orion-14B-Chat-Plugin](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Plugin/summary) |
| 💼Orion-14B-Base-Int4 | [Orion-14B-Base-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Base-Int4) | [Orion-14B-Base-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Base-Int4/summary) |
| 📦Orion-14B-Chat-Int4 | [Orion-14B-Chat-Int4](https://huggingface.co/OrionStarAI/Orion-14B-Chat-Int4) | [Orion-14B-Chat-Int4](https://modelscope.cn/models/OrionStarAI/Orion-14B-Chat-Int4/summary) |
<a name="model-benchmark"></a><br>
# 3. Model Benchmarks
## 3.1. Base Model Orion-14B-Base Benchmarks
### 3.1.1. LLM evaluation results on examination and professional knowledge
| Model | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6 |
| Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3 |
| Baichuan2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0 |
| QWEN-14B | 71.7 | 70.2 | 67.9 | 51.9 | **62.5** | 53.7 |
| InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5 |
| **Orion-14B-Base** | **72.9** | **70.6** | **69.9** | **54.7** | 62.1 | **56.5** |
### 3.1.2. LLM evaluation results on language understanding and common knowledge
| Model |RACE-middle|RACE-high |HellaSwag | PIQA | Lambada | WSC |
|--------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3 |
| Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3 |
| Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 66.3 |
| QWEN-14B | 93.0 | 90.3 | **80.2** | 79.8 | 71.4 | 66.3 |
| InternLM-20B | 86.4 | 83.3 | 78.1 | **80.3** | 71.8 | 68.3 |
| **Orion-14B-Base** | **93.2** | **91.3** | 78.5 | 79.5 | **78.8** | **70.2** |
### 3.1.3. LLM evaluation results of OpenCompass testsets
| Model | Average | Examination | Language | Knowledge | Understanding | Reasoning |
|------------------|----------|----------|----------|----------|----------|----------|
| LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6 |
| Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2 |
| Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2 |
| QWEN-14B | 62.4 | 71.3 | 52.67 | 56.1 | 68.8 | 60.1 |
| InternLM-20B | 59.4 | 62.5 | 55.0 | **60.1** | 67.3 | 54.9 |
|**Orion-14B-Base**| **64.3** | **71.4** | **55.0** | 60.0 | **71.9** | **61.6** |
### 3.1.4. Comparison of LLM performances on Japanese testsets
| Model |**Average**| JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM |
|--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 |
| WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 |
| ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 |
| StableLM-jp-7B | 51.1 | 33.4 | 43.3 | **96.7** | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 |
| LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 |
| Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 |
| QWEN-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 |
| Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | **86.1** | 78.5 | **27.2** | 69.2 | 35.2 |
| **Orion-14B-Base** | **69.1** | **88.2** | **75.8** | 94.1 | 75.7 | **85.1** | 17.3 | **78.8** | **38.0** |
### 3.1.5. Comparison of LLM performances on Korean testsets. n = 0 and n = 5 stand for n-shot prompts used in the evaluation
|Model | **Average**<br>n=0 n=5 | HellaSwag<br>n=0 n=5 | COPA<br> n=0 n=5 | BooIQ<br>n=0 n=5 | SentiNeg<br>n=0 n=5|
|------------------|------------------------------|------------------------------|------------------------------|------------------------------|------------------------------|
| KoGPT | 53.0 70.1 | 55.9 58.3 | 73.5 72.9 | 45.1 59.8 | 37.5 89.4 |
| Polyglot-ko-13B | 69.6 73.7 |**59.5** **63.1**|**79.4** **81.1**| 48.2 60.4 | 91.2 90.2 |
| LLaMA 2-13B | 46.7 63.7 | 41.3 44.0 | 59.3 63.8 | 34.9 73.8 | 51.5 73.4 |
| Baichuan 2-13B | 52.1 58.7 | 39.2 39.6 | 60.6 60.6 | 58.4 61.5 | 50.3 72.9 |
| QWEN-14B | 53.8 73.7 | 45.3 46.8 | 64.9 68.9 | 33.4 83.5 | 71.5 95.7 |
| Yi-34B | 54.2 72.1 | 44.6 44.7 | 58.0 60.6 | 65.9 90.2 | 48.3 92.9 |
|**Orion-14B-Chat**|**74.5** **79.6**| 47.0 49.6 | 77.7 79.4 |**81.6** **90.7**|**92.4** **98.7**|
### 3.1.6. Multilingual evaluation
| Model | Train Lang | Japanese | Korean | Chinese | English |
|--------------------|------------|----------|----------|----------|----------|
| PLaMo-13B | En,Jp | 52.3 | * | * | * |
| Weblab-10B | En,Jp | 50.7 | * | * | * |
| ELYZA-jp-7B | En,Jp | 48.8 | * | * | * |
| StableLM-jp-7B | En,Jp | 51.1 | * | * | * |
| KoGPT-6B | En,Ko | * | 70.1 | * | * |
| Polyglot-ko-13B | En,Ko | * | 70.7 | * | * |
| Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1 |
| Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4 |
| Llama2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3 |
| Yi-34B | Multi | 67.1 | 72.2 | 58.7 | **68.8** |
| **Orion-14B-Chat** | Multi | **69.1** | **79.5** | **67.9** | 67.3 |
## 3.2. Chat Model Orion-14B-Chat Benchmarks
### 3.2.1. Chat model subjective evaluation of MTBench
| Model | First-Turn | Second-Turn | **Average** |
|----------------------|----------|----------|----------|
| Baichuan2-13B-Chat | 7.05 | 6.47 | 6.76 |
| Qwen-14B-Chat | 7.30 | 6.62 | 6.96 |
| Llama2-13B-Chat | 7.10 | 6.20 | 6.65 |
| InternLM-20B-Chat | 7.03 | 5.93 | 6.48 |
| **Orion-14B-Chat** | **7.68** | **7.07** | **7.37** |
\* use vllm for inference
### 3.2.2. Chat model subjective evaluation of AlignBench
| Model | Math. | Logi. | Basic. | Chi. | Comp. | Writ. | Role. | Prof. |**Avg.**|
|--------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Baichuan2-13B-Chat | 3.76 | 4.07 | 6.22 | 6.05 | 7.11 | 6.97 | 6.75 | 6.43 | 5.25 |
| Qwen-14B-Chat |**4.91**|**4.71**|**6.90**| 6.36 | 6.74 | 6.64 | 6.59 | 6.56 |**5.72**|
| Llama2-13B-Chat | 3.05 | 3.79 | 5.43 | 4.40 | 6.76 | 6.63 | 6.99 | 5.65 | 4.70 |
| InternLM-20B-Chat | 3.39 | 3.92 | 5.96 | 5.50 |**7.18**| 6.19 | 6.49 | 6.22 | 4.96 |
| **Orion-14B-Chat** | 4.00 | 4.24 | 6.18 |**6.57**| 7.16 |**7.36**|**7.16**|**6.99**| 5.51 |
\* use vllm for inference
## 3.3. LongChat Model Orion-14B-LongChat Benchmarks
### 3.3.1. LongChat evaluation of LongBench
| Model | NarrativeQA|MultiFieldQA-en|MultiFieldQA-zh| DuReader | QMSum | VCSUM | TREC | TriviaQA | LSHT |RepoBench-P|
|--------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| GPT-3.5-Turbo-16k | **23.60** | **52.30** | **61.20** | 28.70 | 23.40 | **16.00** | 68.00 | **91.40** | 29.20 | 53.60 |
| LongChat-v1.5-7B-32k | 16.90 | 41.40 | 29.10 | 19.50 | 22.70 | 9.90 | 63.50 | 82.30 | 23.20 | 55.30 |
| Vicuna-v1.5-7B-16k | 19.40 | 38.50 | 43.00 | 19.30 | 22.80 | 15.10 | 71.50 | 86.20 | 28.80 | 43.50 |
| Yi-6B-200K | 14.11 | 36.74 | 22.68 | 14.01 | 20.44 | 8.08 | 72.00 | 86.61 | 38.00 | **63.29** |
| Orion-14B-LongChat | 19.47 | 48.11 | 55.84 | **37.02** | **24.87** | 15.44 | **77.00** | 89.12 | **45.50** | 54.31 |
## 3.4. Chat RAG Model Benchmarks
### 3.4.1. LLM evaluation results of self-built RAG testsets
|Model|Effectiveness of Response (Keyword)|*Effectiveness of Response (subjective evaluation)|Quoting Ability|Fallback Ability|*AutoQA|*Data Extraction|
|---------------------|------|------|------|------|------|------|
| Baichuan2-13B-Chat | 85 | 76 | 1 | 0 | 69 | 51 |
| Qwen-14B-Chat | 79 | 77 | 75 | 47 | 68 | 72 |
| Qwen-72B-Chat(Int4) | 87 | 89 | 90 | 32 | 67 | 76 |
| GPT-4 | 91 | 94 | 96 | 95 | 75 | 86 |
| Orion-14B-Chat-RAG | 86 | 87 | 91 | 97 | 73 | 71 |
\* means manual assessment
## 3.5. Chat Plugin Model Orion-14B-Chat-Plugin Benchmarks
### 3.5.1. LLM evaluation results of self-built plugin testsets
|Model |Intent Recognition with Full Params |Intent Recognition with Missing Params |Non-Plugin Invocation Recognition |
|-----------------------|--------|-----------|--------|
| Baichuan2-13B-Chat | 25 | 0 | 0 |
| Qwen-14B-Chat | 55 | 0 | 50 |
| GPT-4 | **95** | 52.38 | 70 |
| Orion-14B-Chat-Plugin | 92.5 | **60.32** | **90** |
## 3.6. Quantized Model Orion-14B-Base-Int4 Benchmarks
### 3.6.1. Comparison of before and after quantization
|Model |Size(GB)|Inference Speed(tokens/s)|C-Eval|CMMLU|MMLU|RACE|HellaSwag|
|-------------------------|-------|-----|------|------|------|------|------|
| OrionStar-14B-Base | 28.0 | 135 | 72.8 | 70.6 | 70.0 | 93.3 | 78.5 |
| OrionStar-14B-Base-Int4 | 8.3 | 178 | 71.8 | 69.8 | 69.2 | 93.1 | 78.0 |
<a name="model-inference"></a><br>
# 4. Model Inference
Model weights, source code, and configuration needed for inference are published on Hugging Face, and the download link
is available in the table at the beginning of this document. We demonstrate various inference methods here, and the
program will automatically download the necessary resources from Hugging Face.
## 4.1. Python Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig
tokenizer = AutoTokenizer.from_pretrained("OrionStarAI/Orion-14B", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("OrionStarAI/Orion-14B", device_map="auto",
torch_dtype=torch.bfloat16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("OrionStarAI/Orion-14B")
messages = [{"role": "user", "content": "Hello, what is your name? "}]
response = model.chat(tokenizer, messages, streaming=False)
print(response)
```
In the above Python code, the model is loaded with `device_map='auto'` to utilize all available GPUs. To specify the
device, you can use something like `export CUDA_VISIBLE_DEVICES=0,1` (using GPUs 0 and 1).
## 4.2. Command Line Tool
```shell
CUDA_VISIBLE_DEVICES=0 python cli_demo.py
```
This command-line tool is designed for chat scenarios, and thus, it does not support calling the base model.
## 4.3. Direct Script Inference
```shell
# base model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation_base.py --model OrionStarAI/Orion-14B --tokenizer OrionStarAI/Orion-14B --prompt hello
# chat model
CUDA_VISIBLE_DEVICES=0 python demo/text_generation.py --model OrionStarAI/Orion-14B-Chat --tokenizer OrionStarAI/Orion-14B-Chat --prompt hi
```
<a name="vllm"></a><br>
## 4.4. Inference by vllm
- Project URL<br>
https://github.com/vllm-project/vllm
- Pull Request<br>
https://github.com/vllm-project/vllm/pull/2539
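As an illustration only, the snippet below sketches how Orion-14B-Chat might be served with vLLM, assuming a vLLM build that already includes the Orion support added in the pull request above; the prompt and sampling parameters are arbitrary examples.
```python
# Hypothetical sketch: offline inference with vLLM (assumes Orion support is available).
from vllm import LLM, SamplingParams

llm = LLM(model="OrionStarAI/Orion-14B-Chat", trust_remote_code=True)
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

outputs = llm.generate(["Hello, what is your name?"], params)
print(outputs[0].outputs[0].text)
```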
<a name="llama-cpp"></a><br>
## 4.5. Inference by llama.cpp
- Project URL<br>
https://github.com/ggerganov/llama.cpp
- Pull Request<br>
https://github.com/ggerganov/llama.cpp/pull/5118
- How to convert to GGUF model
```shell
python convert-hf-to-gguf.py path/to/Orion-14B-Chat --outfile chat.gguf
```
- How to run generation
```shell
./main --frequency-penalty 0.5 --top-k 5 --top-p 0.9 -m chat.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 400 -e
```
## 4.6 Example Output
### 4.6.1. Casual Chat
`````
User: Hello
Orion-14B: Hello! How can I help you today? If you have any questions or need assistance, feel free to ask.
User: What functions do you have?
Orion-14B: As an AI language model, I have a wide range of functions and capabilities, including but not limited to:
Answering general knowledge questions: I can provide information on various topics such as history, science, geography, and more.
Providing definitions and explanations: I can explain complex concepts and provide definitions for words and phrases.
Assisting with tasks: I can help with simple tasks like setting reminders, converting units of measurement, and even ordering food from restaurants.
Generating text: I can write short stories, essays, articles, and other types of content based on prompts given to me.
Translating languages: I can translate text from one language to another.
Chatting: I can engage in conversation with users and respond to their queries in a conversational manner.
Summarizing long texts: I can summarize longer texts like articles, books, and research papers.
Generating random numbers and names: I can generate random numbers within a specified range and also suggest names based on certain criteria.
Playing games: I can play simple games like Hangman, Trivia, and more.
Providing recommendations: I can recommend movies, music, books, and other forms of entertainment based on user preferences.
These are just some of the many functions that I am capable of performing. If you have a specific task in mind, feel free to ask me about it.
User: Tell me a joke.
Orion-14B: Sure, here's a classic one-liner: Why don't scientists trust atoms? Because they make up everything.
`````
### 4.6.2. Japanese & Korean Chat
`````
User:自己を紹介してください
Orion-14B:こんにちは、私の名前はChatMaxで、OrionStarによって開発されたAIアシスタントです。私はユーザーに正確で、客観的で、有用な回答を提供し、彼らが問題を解決し、情報を取得するのを助けることができます。
User:자기소개를 해주세요.
Orion-14B:안녕하세요, 제 이름은 ChatMax입니다. OrionStar에서 개발한 AI 어시스턴트입니다. 저는 사용자들에게 정확하고 객관적이며 유용한 답변을 제공하여 문제를 해결하고 정보를 얻는 데 도움을 줄 수 있습니다.
`````
<a name="declarations-license"></a><br>
# 5. Declarations, License
## 5.1. Declarations
We strongly urge all users not to use the Orion-14B model for any activities that may harm national or social security or violate the law.
Additionally, we request users not to use the Orion-14B model for internet services without proper security review and filing.
We hope all users abide by this principle to ensure that technological development takes place in a regulated and legal environment.
We have done our best to ensure the compliance of the data used in the model training process. However, despite our
significant efforts, unforeseen issues may still arise due to the complexity of the model and data. Therefore, if any
problems arise due to the use of the Orion-14B open-source model, including but not limited to data security
issues, public opinion risks, or any risks and issues arising from the model being misled, abused, disseminated, or
improperly utilized, we will not assume any responsibility.
## 5.2. License
Community use of the Orion-14B series models
- For code, please comply with [Apache License Version 2.0](./LICENSE)<br>
- For model, please comply with [【Orion-14B Series】 Models Community License Agreement](./ModelsCommunityLicenseAgreement)
<a name="company-introduction"></a><br>
# 6. Company Introduction
OrionStar is a leading global service robot solutions company, founded in September 2016. OrionStar is dedicated to
using artificial intelligence technology to create the next generation of revolutionary robots, allowing people to break
free from repetitive physical labor and making human work and life more intelligent and enjoyable. Through technology,
OrionStar aims to make society and the world a better place.
OrionStar possesses fully self-developed end-to-end artificial intelligence technologies, such as voice interaction and
visual navigation. It integrates product development capabilities and technological application capabilities. Based on
the Orion robotic arm platform, it has launched products such as OrionStar AI Robot Greeting, AI Robot Greeting Mini,
Lucki, Coffee Master, and established the open platform OrionOS for Orion robots. Following the philosophy of "Born for
Truly Useful Robots", OrionStar empowers more people through AI technology.
**The core strengths of OrionStar lie in possessing end-to-end AI application capabilities,** including big data preprocessing, large model pretraining, fine-tuning, prompt engineering, agents, etc. With comprehensive end-to-end model training capabilities, including systematic data processing workflows and parallel training across hundreds of GPUs, this expertise has been successfully applied in various industry scenarios such as government affairs, cloud services, international e-commerce, and fast-moving consumer goods.
Companies with demands for deploying large-scale model applications are welcome to contact us.<br>
**Enquiry Hotline: 400-898-7779**<br>
**E-mail: [email protected]**
<div align="center">
<img src="./assets/imgs/wechat_group.jpg" alt="wechat" width="40%" />
</div>
|
GLaDOSIsHere/Under_testing_models
|
GLaDOSIsHere
| 2024-01-25T21:01:11Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-01-25T20:49:08Z |
---
license: openrail
---
This "new model" is for Voice Models that need to be tested, and trained depending on the situation... Or just abandon them for months and come back.
|
FurongZou/distilbert-base-uncased-finetuned-ner
|
FurongZou
| 2024-01-25T20:51:34Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-25T20:14:58Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0841
- eval_precision: 0.8833
- eval_recall: 0.9042
- eval_f1: 0.8936
- eval_accuracy: 0.9762
- eval_runtime: 115.4849
- eval_samples_per_second: 28.142
- eval_steps_per_second: 1.766
- epoch: 0.57
- step: 500
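The generated card does not include usage code; the following is a minimal, hypothetical sketch of running this checkpoint with the `transformers` token-classification pipeline (the example sentence is arbitrary).
```python
# Hypothetical usage sketch for this NER checkpoint via the pipeline API.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="FurongZou/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)
print(ner("My name is Sarah and I live in London."))
```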
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.37.1
- Pytorch 2.0.0+cpu
- Datasets 2.16.1
- Tokenizers 0.15.1
|
melll-uff/pt-br_simcse
|
melll-uff
| 2024-01-25T20:41:38Z | 44 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pt",
"dataset:loremipsum3658/sick-br",
"dataset:assin",
"dataset:assin2",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-25T20:34:43Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- loremipsum3658/sick-br
- assin
- assin2
license: apache-2.0
language:
- pt
library_name: sentence-transformers
---
# Pt-br SimCSE
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
For details, see the Sentence-Transformers SimCSE example: https://www.sbert.net/examples/unsupervised_learning/SimCSE/README.html
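A minimal usage sketch with the Sentence-Transformers API is shown below; the Portuguese example sentences are illustrative and not taken from the training data.
```python
# Hypothetical usage sketch: encoding Portuguese sentences and comparing them.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("melll-uff/pt-br_simcse")
sentences = [
    "Um homem está tocando violão.",      # "A man is playing the guitar."
    "Uma pessoa toca um instrumento.",    # "A person plays an instrument."
]
embeddings = model.encode(sentences, convert_to_tensor=True)  # 768-dim vectors
print(util.cos_sim(embeddings[0], embeddings[1]))
```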
|
ptoro/EvolCodeLlama-7b
|
ptoro
| 2024-01-25T20:38:16Z | 9 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-01-25T19:16:14Z |
---
license: llama2
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: EvolCodeLlama-7b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: codellama/CodeLlama-7b-hf
base_model_config: codellama/CodeLlama-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
hub_model_id: EvolCodeLlama-7b
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: ptoro/Evol-Instruct-Python-1k-testing
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.02
output_dir: ./qlora-out
adapter: qlora
lora_model_dir:
sequence_len: 2048
sample_packing: true
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: axolotl
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 100
eval_steps: 0.01
save_strategy: epoch
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
</details><br>
# EvolCodeLlama-7b
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3828
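The card does not include inference code; the snippet below is a minimal, untested sketch of attaching the QLoRA adapter published in this repository to its CodeLlama base model with PEFT.
```python
# Hypothetical sketch: loading the LoRA adapter in this repo on top of the base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ptoro/EvolCodeLlama-7b")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```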
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3627 | 0.01 | 1 | 0.5027 |
| 0.3412 | 0.03 | 4 | 0.5026 |
| 0.3806 | 0.07 | 8 | 0.5023 |
| 0.392 | 0.1 | 12 | 0.5018 |
| 0.4141 | 0.14 | 16 | 0.4999 |
| 0.3433 | 0.17 | 20 | 0.4954 |
| 0.3702 | 0.21 | 24 | 0.4851 |
| 0.2948 | 0.24 | 28 | 0.4682 |
| 0.3387 | 0.28 | 32 | 0.4499 |
| 0.2437 | 0.31 | 36 | 0.4331 |
| 0.2526 | 0.35 | 40 | 0.4221 |
| 0.2721 | 0.38 | 44 | 0.4146 |
| 0.2292 | 0.42 | 48 | 0.4089 |
| 0.1986 | 0.45 | 52 | 0.4028 |
| 0.3258 | 0.48 | 56 | 0.3983 |
| 0.3509 | 0.52 | 60 | 0.3950 |
| 0.2697 | 0.55 | 64 | 0.3926 |
| 0.2646 | 0.59 | 68 | 0.3907 |
| 0.3979 | 0.62 | 72 | 0.3900 |
| 0.2737 | 0.66 | 76 | 0.3880 |
| 0.2271 | 0.69 | 80 | 0.3865 |
| 0.247 | 0.73 | 84 | 0.3847 |
| 0.3112 | 0.76 | 88 | 0.3824 |
| 0.2724 | 0.8 | 92 | 0.3820 |
| 0.207 | 0.83 | 96 | 0.3814 |
| 0.3492 | 0.87 | 100 | 0.3810 |
| 0.2474 | 0.9 | 104 | 0.3802 |
| 0.4037 | 0.94 | 108 | 0.3785 |
| 0.2295 | 0.97 | 112 | 0.3773 |
| 0.2689 | 1.0 | 116 | 0.3760 |
| 0.2546 | 1.02 | 120 | 0.3753 |
| 0.1916 | 1.05 | 124 | 0.3768 |
| 0.2458 | 1.09 | 128 | 0.3758 |
| 0.2155 | 1.12 | 132 | 0.3768 |
| 0.2341 | 1.16 | 136 | 0.3773 |
| 0.1909 | 1.19 | 140 | 0.3793 |
| 0.1911 | 1.23 | 144 | 0.3759 |
| 0.2096 | 1.26 | 148 | 0.3761 |
| 0.2353 | 1.29 | 152 | 0.3772 |
| 0.2606 | 1.33 | 156 | 0.3773 |
| 0.1485 | 1.36 | 160 | 0.3778 |
| 0.1807 | 1.4 | 164 | 0.3749 |
| 0.2294 | 1.43 | 168 | 0.3770 |
| 0.216 | 1.47 | 172 | 0.3759 |
| 0.1791 | 1.5 | 176 | 0.3727 |
| 0.2605 | 1.54 | 180 | 0.3733 |
| 0.2838 | 1.57 | 184 | 0.3738 |
| 0.2632 | 1.61 | 188 | 0.3694 |
| 0.1839 | 1.64 | 192 | 0.3686 |
| 0.1939 | 1.68 | 196 | 0.3690 |
| 0.2413 | 1.71 | 200 | 0.3699 |
| 0.1494 | 1.74 | 204 | 0.3689 |
| 0.2782 | 1.78 | 208 | 0.3695 |
| 0.2314 | 1.81 | 212 | 0.3696 |
| 0.2499 | 1.85 | 216 | 0.3691 |
| 0.1976 | 1.88 | 220 | 0.3672 |
| 0.2587 | 1.92 | 224 | 0.3660 |
| 0.2598 | 1.95 | 228 | 0.3658 |
| 0.2686 | 1.99 | 232 | 0.3666 |
| 0.216 | 2.01 | 236 | 0.3673 |
| 0.1261 | 2.04 | 240 | 0.3723 |
| 0.1938 | 2.08 | 244 | 0.3811 |
| 0.1906 | 2.11 | 248 | 0.3869 |
| 0.1375 | 2.15 | 252 | 0.3829 |
| 0.228 | 2.18 | 256 | 0.3796 |
| 0.2524 | 2.22 | 260 | 0.3789 |
| 0.118 | 2.25 | 264 | 0.3809 |
| 0.2224 | 2.29 | 268 | 0.3834 |
| 0.1477 | 2.32 | 272 | 0.3847 |
| 0.2095 | 2.35 | 276 | 0.3849 |
| 0.1919 | 2.39 | 280 | 0.3820 |
| 0.1916 | 2.42 | 284 | 0.3804 |
| 0.1625 | 2.46 | 288 | 0.3788 |
| 0.2054 | 2.49 | 292 | 0.3794 |
| 0.1605 | 2.53 | 296 | 0.3810 |
| 0.1564 | 2.56 | 300 | 0.3819 |
| 0.196 | 2.6 | 304 | 0.3822 |
| 0.1975 | 2.63 | 308 | 0.3830 |
| 0.1406 | 2.67 | 312 | 0.3833 |
| 0.2754 | 2.7 | 316 | 0.3830 |
| 0.1544 | 2.74 | 320 | 0.3829 |
| 0.1733 | 2.77 | 324 | 0.3830 |
| 0.1862 | 2.81 | 328 | 0.3832 |
| 0.1634 | 2.84 | 332 | 0.3829 |
| 0.1966 | 2.87 | 336 | 0.3830 |
| 0.1306 | 2.91 | 340 | 0.3831 |
| 0.1444 | 2.94 | 344 | 0.3828 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
swapandeep39/dogbooth
|
swapandeep39
| 2024-01-25T20:28:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-25T19:10:03Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - swapandeep39/dogbooth
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on the instance prompt "a photo of [v]dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
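As a hedged illustration (not part of the generated card), the fine-tuned pipeline could be loaded with `diffusers` roughly as follows, using the instance prompt above.
```python
# Hypothetical sketch: generating an image with the DreamBooth-tuned pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "swapandeep39/dogbooth", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of [v]dog in a bucket", num_inference_steps=25).images[0]
image.save("dogbooth_sample.png")
```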
|
cezeozue/distilbert-base-uncased-finetuned-clinc
|
cezeozue
| 2024-01-25T20:23:25Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-23T20:04:17Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7730
- Accuracy: 0.9161
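No usage example is included in the card; a minimal, hypothetical sketch with the `transformers` text-classification pipeline (the example utterance is arbitrary) is shown below.
```python
# Hypothetical usage sketch: classifying an utterance into an intent label.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cezeozue/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("Can you transfer $100 from checking to savings?"))
```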
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2776 | 0.7287 |
| 3.7835 | 2.0 | 636 | 1.8647 | 0.8358 |
| 3.7835 | 3.0 | 954 | 1.1524 | 0.8977 |
| 1.6878 | 4.0 | 1272 | 0.8547 | 0.9129 |
| 0.8994 | 5.0 | 1590 | 0.7730 | 0.9161 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
jeiku/RPGodzilla_3.43B
|
jeiku
| 2024-01-25T20:16:27Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"arxiv:2203.05482",
"base_model:jeiku/Bluemoon_cleaned_StableLM",
"base_model:merge:jeiku/Bluemoon_cleaned_StableLM",
"base_model:jeiku/Everything_v3_128_StableLM",
"base_model:merge:jeiku/Everything_v3_128_StableLM",
"base_model:jeiku/LimaRP_StableLM",
"base_model:merge:jeiku/LimaRP_StableLM",
"base_model:jeiku/No_Robots_Alpaca_StableLM",
"base_model:merge:jeiku/No_Robots_Alpaca_StableLM",
"base_model:jeiku/PIPPA_128_StableLM",
"base_model:merge:jeiku/PIPPA_128_StableLM",
"base_model:jeiku/RPGPT_StableLM",
"base_model:merge:jeiku/RPGPT_StableLM",
"base_model:jeiku/Theory_of_Mind_128_StableLM",
"base_model:merge:jeiku/Theory_of_Mind_128_StableLM",
"base_model:jeiku/Theory_of_Mind_RP_128_StableLM",
"base_model:merge:jeiku/Theory_of_Mind_RP_128_StableLM",
"base_model:jeiku/Toxic_DPO_StableLM",
"base_model:merge:jeiku/Toxic_DPO_StableLM",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-25T20:04:11Z |
---
base_model:
- jeiku/Bluemoon_cleaned_StableLM
- jeiku/Everything_v3_128_StableLM
- jeiku/Theory_of_Mind_128_StableLM
- jeiku/Theory_of_Mind_RP_128_StableLM
- jeiku/LimaRP_StableLM
- jeiku/No_Robots_Alpaca_StableLM
- jeiku/RPGPT_StableLM
- jeiku/Toxic_DPO_StableLM
- jeiku/PIPPA_128_StableLM
- jeiku/Gnosis_StableLM
tags:
- mergekit
- merge
---
# snek
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* snek + [jeiku/Bluemoon_cleaned_StableLM](https://huggingface.co/jeiku/Bluemoon_cleaned_StableLM)
* snek + [jeiku/Everything_v3_128_StableLM](https://huggingface.co/jeiku/Everything_v3_128_StableLM)
* snek + [jeiku/Theory_of_Mind_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_128_StableLM)
* snek + [jeiku/Theory_of_Mind_RP_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_RP_128_StableLM)
* snek + [jeiku/LimaRP_StableLM](https://huggingface.co/jeiku/LimaRP_StableLM)
* snek + [jeiku/No_Robots_Alpaca_StableLM](https://huggingface.co/jeiku/No_Robots_Alpaca_StableLM)
* snek + [jeiku/RPGPT_StableLM](https://huggingface.co/jeiku/RPGPT_StableLM)
* snek + [jeiku/Toxic_DPO_StableLM](https://huggingface.co/jeiku/Toxic_DPO_StableLM)
* snek + [jeiku/PIPPA_128_StableLM](https://huggingface.co/jeiku/PIPPA_128_StableLM)
* snek + [jeiku/Gnosis_StableLM](https://huggingface.co/jeiku/Gnosis_StableLM)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
models:
- model: snek+jeiku/Theory_of_Mind_128_StableLM
parameters:
weight: 1
- model: snek+jeiku/Everything_v3_128_StableLM
parameters:
weight: 1
- model: snek+jeiku/Gnosis_StableLM
parameters:
weight: 1
- model: snek+jeiku/Toxic_DPO_StableLM
parameters:
weight: 1
- model: snek+jeiku/No_Robots_Alpaca_StableLM
parameters:
weight: 1
- model: snek+jeiku/Theory_of_Mind_RP_128_StableLM
parameters:
weight: 1
- model: snek+jeiku/Bluemoon_cleaned_StableLM
parameters:
weight: 1
- model: snek+jeiku/RPGPT_StableLM
parameters:
weight: 1
- model: snek+jeiku/LimaRP_StableLM
parameters:
weight: 1
- model: snek+jeiku/PIPPA_128_StableLM
parameters:
weight: 1
dtype: float16
```
|
hpourmodheji/ppo-LunarLander-v2
|
hpourmodheji
| 2024-01-25T20:14:14Z | 3 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2023-11-16T20:18:23Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -151.61 +/- 91.12
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'gym_torch_ppo_demo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'hpourmodheji/ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
moreover18/vit-part1-friends2
|
moreover18
| 2024-01-25T20:09:03Z | 178 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:moreover18/vit-part1-friends",
"base_model:finetune:moreover18/vit-part1-friends",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-25T19:54:09Z |
---
license: apache-2.0
base_model: moreover18/vit-part1-friends
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-part1-friends2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7395348837209302
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-part1-friends2
This model is a fine-tuned version of [moreover18/vit-part1-friends](https://huggingface.co/moreover18/vit-part1-friends) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7298
- Accuracy: 0.7395
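The card does not show usage code; a minimal, hypothetical sketch with the `transformers` image-classification pipeline follows (the image path is a placeholder).
```python
# Hypothetical usage sketch: classifying a local image with this fine-tuned ViT.
from transformers import pipeline

classifier = pipeline("image-classification", model="moreover18/vit-part1-friends2")
print(classifier("frame.jpg"))  # placeholder path to an image file
```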
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1497 | 7.41 | 50 | 0.7298 | 0.7395 |
### Framework versions
- Transformers 4.37.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.15.1
|
sessex/mm-tabi-whitebg_LoRA
|
sessex
| 2024-01-25T20:04:43Z | 203 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-25T19:23:39Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a product photo of TOK shoe'
output:
url: 'images/basic.png'
- text: 'a product photo of TOK shoe, glittery cowboy boot, front view'
output:
url: 'images/glitterboot.png'
- text: 'a photo of TOK shoe underwater, waves in background'
output:
url: 'images/underwater.png'
- text: 'a product photo of TOK shoe, pink floral background, closeup'
output:
url: 'images/pinkcloseup.png'
- text: 'a product photo of TOK shoe, pink floral background, closeup, surrealism'
output:
url: 'images/pinksurrealism.png'
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a product photo of TOK shoe
license: openrail++
---
# SDXL LoRA DreamBooth - sessex/mm-tabi-whitebg_LoRA
<Gallery />
## Model description
These are sessex/mm-tabi-whitebg_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a product photo of TOK shoe` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/sessex/mm-tabi-whitebg_LoRA/tree/main) them in the Files & versions tab.
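As an untested illustration, these LoRA weights might be applied to the SDXL base model with `diffusers` roughly as follows; the prompt reuses the trigger phrase above, and the fp16 VAE mentioned earlier could optionally be swapped in.
```python
# Hypothetical sketch: applying this LoRA to SDXL and generating with the trigger prompt.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("sessex/mm-tabi-whitebg_LoRA")
image = pipe("a product photo of TOK shoe, pink floral background", num_inference_steps=30).images[0]
image.save("tabi_sample.png")
```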
|
jaeyoungk/komt-mistral-7b-v1-fin
|
jaeyoungk
| 2024-01-25T19:41:46Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:davidkim205/komt-mistral-7b-v1",
"base_model:adapter:davidkim205/komt-mistral-7b-v1",
"region:us"
] | null | 2024-01-25T19:17:03Z |
---
library_name: peft
base_model: davidkim205/komt-mistral-7b-v1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
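Since this section is still a placeholder, the snippet below is a minimal, hypothetical sketch (not provided by the author) of loading this PEFT adapter on top of the base model declared in the card metadata.
```python
# Hypothetical sketch: attaching the adapter in this repo to its declared base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "davidkim205/komt-mistral-7b-v1", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "jaeyoungk/komt-mistral-7b-v1-fin")
tokenizer = AutoTokenizer.from_pretrained("davidkim205/komt-mistral-7b-v1")
```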
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
daochf/Lora-Mistral-7B-Instruct-v0_2-QandA_2g_v01-r3-v04
|
daochf
| 2024-01-25T19:38:30Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-01-25T19:38:12Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
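As this section is still a placeholder, here is a minimal, hypothetical sketch (not provided by the author) of loading this PEFT adapter on top of the base model listed in the card metadata.
```python
# Hypothetical sketch: attaching the adapter in this repo to its declared base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "daochf/Lora-Mistral-7B-Instruct-v0_2-QandA_2g_v01-r3-v04")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
```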
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
asun17904/imdb-t5-base-a2b2
|
asun17904
| 2024-01-25T19:32:09Z | 1 | 0 |
pytorch
|
[
"pytorch",
"t5",
"en",
"license:mit",
"region:us"
] | null | 2024-01-25T06:26:05Z |
---
language: en
license: mit
library_name: pytorch
---
# Knowledge Continuity Regularized Network
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 8
- `gradient_accumulation_steps` = 2
- `weight_decay` = 1e-09
- `seed` = 42
Regularization Hyperparameters
- `numerical stability denominator constant` = 0.01
- `lambda` = 0.02
- `alpha` = 2.0
- `beta` = 2.0
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|11.926|0.939|1.0|
|11.887|0.941|2.0|
|11.913|0.941|3.0|
|11.912|0.940|4.0|
|11.878|0.942|5.0|
|11.784|0.945|6.0|
|11.718|0.947|7.0|
|11.742|0.946|8.0|
|11.763|0.945|9.0|
|11.736|0.946|10.0|
|11.728|0.946|11.0|
|11.790|0.944|12.0|
|11.811|0.944|13.0|
|11.784|0.945|14.0|
|11.894|0.941|15.0|
|11.739|0.946|16.0|
|11.722|0.947|17.0|
|11.959|0.940|18.0|
|11.813|0.944|19.0|
|11.733|0.946|20.0|
|11.687|0.948|21.0|
|11.707|0.947|22.0|
|11.700|0.947|23.0|
|11.691|0.948|24.0|
|11.827|0.943|25.0|
|11.720|0.947|26.0|
|11.698|0.947|27.0|
|11.692|0.948|28.0|
|11.707|0.947|29.0|
|11.706|0.947|30.0|
|11.711|0.947|31.0|
|11.721|0.947|32.0|
|11.687|0.948|33.0|
|11.728|0.946|34.0|
|11.678|0.948|35.0|
|11.691|0.948|36.0|
|11.692|0.948|37.0|
|11.684|0.948|38.0|
|11.743|0.946|39.0|
|11.631|0.950|40.0|
|11.707|0.947|41.0|
|11.652|0.949|42.0|
|11.688|0.948|43.0|
|11.647|0.949|44.0|
|11.639|0.949|45.0|
|11.660|0.949|46.0|
|11.652|0.949|47.0|
|11.648|0.949|48.0|
|11.652|0.949|49.0|
|
alirzb/SeizureClassifier_Wav2Vec_43243498
|
alirzb
| 2024-01-25T19:27:54Z | 146 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-25T18:44:33Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SeizureClassifier_Wav2Vec_43243498
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SeizureClassifier_Wav2Vec_43243498
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0520
- Accuracy: 0.9901
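The card does not show inference code; a minimal, hypothetical sketch using the `transformers` audio-classification pipeline is given below (the audio path is a placeholder; wav2vec2 expects 16 kHz mono audio).
```python
# Hypothetical usage sketch: scoring an audio clip with this fine-tuned wav2vec2 classifier.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="alirzb/SeizureClassifier_Wav2Vec_43243498",
)
print(classifier("example_clip.wav"))  # placeholder path to a 16 kHz audio file
```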
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1121 | 0.99 | 44 | 0.9842 | 0.8144 |
| 0.6303 | 1.99 | 88 | 0.5874 | 0.8861 |
| 0.4605 | 2.98 | 132 | 0.3826 | 0.9406 |
| 0.323 | 4.0 | 177 | 0.2791 | 0.9530 |
| 0.2435 | 4.99 | 221 | 0.3828 | 0.8688 |
| 0.2354 | 5.99 | 265 | 0.1321 | 0.9752 |
| 0.2491 | 6.98 | 309 | 0.1552 | 0.9653 |
| 0.1116 | 8.0 | 354 | 0.1540 | 0.9579 |
| 0.0934 | 8.99 | 398 | 0.1053 | 0.9827 |
| 0.0774 | 9.99 | 442 | 0.1016 | 0.9777 |
| 0.0553 | 10.98 | 486 | 0.1856 | 0.9530 |
| 0.0368 | 12.0 | 531 | 0.1151 | 0.9728 |
| 0.017 | 12.99 | 575 | 0.0516 | 0.9876 |
| 0.0153 | 13.99 | 619 | 0.0540 | 0.9901 |
| 0.0144 | 14.92 | 660 | 0.0520 | 0.9901 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
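## Example usage (sketch)
The card does not include inference code; the following is a minimal sketch, assuming the standard `transformers` audio-classification pipeline and a placeholder audio file path — the label set depends on how the classifier was trained.
```python
from transformers import pipeline

# load the fine-tuned wav2vec2 classifier from the Hub
classifier = pipeline(
    "audio-classification",
    model="alirzb/SeizureClassifier_Wav2Vec_43243498",
)

# placeholder path; the pipeline decodes the audio and resamples it for the model
predictions = classifier("example_recording.wav")
print(predictions)  # list of {"label": ..., "score": ...} dictionaries
```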
|
jondurbin/cinetrainer-34b
|
jondurbin
| 2024-01-25T19:25:45Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2024-01-21T09:29:17Z |
---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
---
This is an almost entirely useless model for most people. It's an intermediate fine-tune of yi-34b-200k, used for cinematika-v0.2 dataset creation.
|
BhoneMyintSwe/Handwritten-classification
|
BhoneMyintSwe
| 2024-01-25T19:23:23Z | 0 | 0 |
keras
|
[
"keras",
"code",
"image-classification",
"en",
"license:mit",
"region:us"
] |
image-classification
| 2024-01-25T19:13:42Z |
---
license: mit
language:
- en
metrics:
- accuracy
library_name: keras
pipeline_tag: image-classification
tags:
- code
---
|
xuykin/va-er
|
xuykin
| 2024-01-25T19:13:16Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-25T19:10:01Z |
---
license: creativeml-openrail-m
---
|
LoneStriker/Umbra-v2.1-MoE-4x10.7-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-25T19:07:42Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"mergekit",
"vicgalle/CarbonBeagle-11B",
"Sao10K/Fimbulvetr-10.7B-v1",
"bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED",
"Yhyu13/LMCocktail-10.7B-v1",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T18:57:42Z |
---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- vicgalle/CarbonBeagle-11B
- Sao10K/Fimbulvetr-10.7B-v1
- bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
- Yhyu13/LMCocktail-10.7B-v1
---

# Umbra-v2.1-MoE-4x10.7
The [Umbra Series] is an offshoot of the [Lumosia Series], with the goal of being a general assistant that has a knack for storytelling and RP/ERP.
-What's New in v2.1?
Umbra v2.1 isn't just a simple update; it's like giving the model a double shot of espresso. I've changed the models and prompts in an attempt to make Umbra
not only your go-to assistant for general knowledge but also a great storyteller and RP/ERP companion.
-Longer Positive, Shorter Negative
In an effort to trick the gates into being less uptight, I've added more positive prompts and snappier negative ones.
These changes are based on the model's strengths and, frankly, my whimsical preferences.
-Experimental, As Always
Remember, folks, "v2.1" doesn't mean it's superior to its predecessors – it's just another step in the quest.
It's the 'Empire Strikes Back' of our series – could be better, could be worse, but definitely more dramatic.
-Base Context and Coherence
Umbra v2.1 has a base context of 8k scrolling window.
-The Tavern Card
Just for fun - the Umbra Personality Tavern Card. It's your gateway to immersive storytelling experiences,
a little like having a 'Choose Your Own Adventure' book, but way cooler because it's digital and doesn't get lost under your bed.
-Token Error? Fixed!
Umbra-v2 had a tokenizer error, but it was removed faster than you can say "Cops love Donuts".
So, give Umbra v2.1 a whirl and let me know how it goes. Your feedback is like the secret sauce in my development burger.
```
### System:
### USER:{prompt}
### Assistant:
```
Settings:
```
Temp: 1.0
min-p: 0.02-0.1
```
## Evals:
* Avg: 73.59
* ARC: 69.11
* HellaSwag: 87.57
* MMLU: 66.48
* T-QA: 66.75
* Winogrande: 83.11
* GSM8K: 68.69
## Examples:
```
posted soon
```
```
posted soon
```
## 🧩 Configuration
```
base_model: vicgalle/CarbonBeagle-11B
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: vicgalle/CarbonBeagle-11B
positive_prompts: [Revamped]
- source_model: Sao10K/Fimbulvetr-10.7B-v1
positive_prompts: [Revamped]
- source_model: bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
positive_prompts: [Revamped]
- source_model: Yhyu13/LMCocktail-10.7B-v1
positive_prompts: [Revamped]
```
```
Umbra-v2-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models:
* [vicgalle/CarbonBeagle-11B](https://huggingface.co/vicgalle/CarbonBeagle-11B)
* [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
* [bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED](https://huggingface.co/bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED)
* [Yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Steelskull/Umbra-v2-MoE-4x10.7"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
xuykin/neg
|
xuykin
| 2024-01-25T19:07:30Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-25T19:04:45Z |
---
license: creativeml-openrail-m
---
|
thethatone/Andrija
|
thethatone
| 2024-01-25T19:05:15Z | 0 | 0 |
asteroid
|
[
"asteroid",
"text-generation",
"en",
"dataset:HuggingFaceM4/WebSight",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-25T19:04:03Z |
---
license: apache-2.0
datasets:
- HuggingFaceM4/WebSight
language:
- en
metrics:
- character
library_name: asteroid
pipeline_tag: text-generation
---
|
xuykin/anke
|
xuykin
| 2024-01-25T19:00:50Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-25T18:59:33Z |
---
license: creativeml-openrail-m
---
|
Rootreck/so-vits-svc-4.0-Scrap_Mechanic
|
Rootreck
| 2024-01-25T18:58:04Z | 0 | 0 | null |
[
"Scrap Mechanic Survival",
"Scrap Mechanic",
"en",
"region:us"
] | null | 2023-12-28T06:42:58Z |
---
language:
- en
tags:
- Scrap Mechanic Survival
- Scrap Mechanic
---
|
achimvp/Reinforce-CartPole-v1
|
achimvp
| 2024-01-25T18:53:11Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T18:52:58Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LoneStriker/Umbra-v2.1-MoE-4x10.7-3.5bpw-h6-exl2
|
LoneStriker
| 2024-01-25T18:49:35Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"mergekit",
"vicgalle/CarbonBeagle-11B",
"Sao10K/Fimbulvetr-10.7B-v1",
"bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED",
"Yhyu13/LMCocktail-10.7B-v1",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T18:35:30Z |
---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- vicgalle/CarbonBeagle-11B
- Sao10K/Fimbulvetr-10.7B-v1
- bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
- Yhyu13/LMCocktail-10.7B-v1
---

# Umbra-v2.1-MoE-4x10.7
The [Umbra Series] is an offshoot of the [Lumosia Series], with the goal of being a general assistant that has a knack for storytelling and RP/ERP.
-What's New in v2.1?
Umbra v2.1 isn't just a simple update; it's like giving the model a double shot of espresso. I've changed the models and prompts in an attempt to make Umbra
not only your go-to assistant for general knowledge but also a great storyteller and RP/ERP companion.
-Longer Positive, Shorter Negative
In an effort to trick the gates into being less uptight, I've added more positive prompts and snappier negative ones.
These changes are based on the model's strengths and, frankly, my whimsical preferences.
-Experimental, As Always
Remember, folks, "v2.1" doesn't mean it's superior to its predecessors – it's just another step in the quest.
It's the 'Empire Strikes Back' of our series – could be better, could be worse, but definitely more dramatic.
-Base Context and Coherence
Umbra v2.1 has a base context of 8k scrolling window.
-The Tavern Card
Just for fun - the Umbra Personality Tavern Card. It's your gateway to immersive storytelling experiences,
a little like having a 'Choose Your Own Adventure' book, but way cooler because it's digital and doesn't get lost under your bed.
-Token Error? Fixed!
Umbra-v2 had a tokenizer error, but it was removed faster than you can say "Cops love Donuts".
So, give Umbra v2.1 a whirl and let me know how it goes. Your feedback is like the secret sauce in my development burger.
```
### System:
### USER:{prompt}
### Assistant:
```
Settings:
```
Temp: 1.0
min-p: 0.02-0.1
```
## Evals:
* Avg: 73.59
* ARC: 69.11
* HellaSwag: 87.57
* MMLU: 66.48
* T-QA: 66.75
* Winogrande: 83.11
* GSM8K: 68.69
## Examples:
```
posted soon
```
```
posted soon
```
## 🧩 Configuration
```
base_model: vicgalle/CarbonBeagle-11B
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: vicgalle/CarbonBeagle-11B
positive_prompts: [Revamped]
- source_model: Sao10K/Fimbulvetr-10.7B-v1
positive_prompts: [Revamped]
- source_model: bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
positive_prompts: [Revamped]
- source_model: Yhyu13/LMCocktail-10.7B-v1
positive_prompts: [Revamped]
```
```
Umbra-v2-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models:
* [vicgalle/CarbonBeagle-11B](https://huggingface.co/vicgalle/CarbonBeagle-11B)
* [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
* [bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED](https://huggingface.co/bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED)
* [Yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Steelskull/Umbra-v2-MoE-4x10.7"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
masayakirishima/3d-model
|
masayakirishima
| 2024-01-25T18:49:20Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-25T17:04:05Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: None
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - masayakirishima/rikuto
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the None dataset. You can find some example images below.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
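A minimal loading sketch with `diffusers` (assuming this repository holds the LoRA weights; the prompt and inference settings are illustrative):
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# the card notes the fp16-fix VAE was used for training, so load it for inference as well
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# attach the LoRA adaptation weights from this repository
pipe.load_lora_weights("masayakirishima/3d-model")

# illustrative prompt; results depend on the concepts the LoRA was trained on
image = pipe("a 3d model style render of a small robot", num_inference_steps=30).images[0]
image.save("lora_sample.png")
```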
|
DeepFunxtion/rack-jobber
|
DeepFunxtion
| 2024-01-25T18:46:16Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-audio",
"stable-diffusion",
"lora",
"template:sd-lora",
"license:apache-2.0",
"region:us"
] |
text-to-audio
| 2024-01-25T18:39:39Z |
---
tags:
- text-to-audio
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: Good Vibes Only
parameters:
negative_prompt: That Part
output:
url: images/ReadyPlayerMe-Avatar.png
base_model: music-gen/distro
instance_prompt: Mexico, Afro-American
license: apache-2.0
---
# rack jobber
<Gallery />
## Model description
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6563e3248fb38d71f7886ba9/TC5LnRXvjAlAfbdTLRq09.mpga"></audio>

## Trigger words
You should use `Mexico` to trigger the image generation.
You should use `Afro-American` to trigger the image generation.
## Download model
[Download](/DeepFunxtion/rack-jobber/tree/main) them in the Files & versions tab.
|
dvilasuero/DistilabeledHermes-2.5-Mistral-7B
|
dvilasuero
| 2024-01-25T18:46:05Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T18:42:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jeevana/mistral_7b_group8QnA
|
jeevana
| 2024-01-25T18:45:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"en",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T18:29:17Z |
---
language:
- en
library_name: transformers
---
Run summary (141/141 steps, runtime 14:37, Epoch 1/1):

| Metric | Value |
|--|--|
| train/epoch | 1.0 |
| train/global_step | 141 |
| train/learning_rate | 0.0002 |
| train/loss | 0.9442 |
| train/total_flos | 2342567262191616.0 |
| train/train_loss | 0.94424 |
| train/train_runtime | 894.5304 |
| train/train_samples_per_second | 0.63 |
| train/train_steps_per_second | 0.158 |

| Step | Training Loss |
|--|--|
| 25 | 1.142500 |
| 50 | 1.023600 |
| 75 | 0.902600 |
| 100 | 0.805700 |
| 125 | 0.944200 |
|
AndreaMS/clasificador-peliculas
|
AndreaMS
| 2024-01-25T18:35:25Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T18:35:07Z |
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-peliculas
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-peliculas
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3477
- Accuracy: 0.9292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2986 | 1.0 | 6735 | 0.3240 | 0.8955 |
| 0.2255 | 2.0 | 13470 | 0.3254 | 0.9253 |
| 0.1367 | 3.0 | 20205 | 0.3477 | 0.9292 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
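## Example usage (sketch)
The card provides no inference code; a minimal sketch with the standard text-classification pipeline is shown below — the example sentence is illustrative and the label names depend on the (undocumented) training data.
```python
from transformers import pipeline

# load the fine-tuned ELECTRA-based classifier from the Hub
clasificador = pipeline("text-classification", model="AndreaMS/clasificador-peliculas")

# illustrative Spanish input; the label set is defined by the training dataset
print(clasificador("Una película emocionante con un final inesperado."))
```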
|
LoneStriker/Umbra-v2.1-MoE-4x10.7-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-25T18:35:18Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"mergekit",
"vicgalle/CarbonBeagle-11B",
"Sao10K/Fimbulvetr-10.7B-v1",
"bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED",
"Yhyu13/LMCocktail-10.7B-v1",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T18:23:47Z |
---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- vicgalle/CarbonBeagle-11B
- Sao10K/Fimbulvetr-10.7B-v1
- bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
- Yhyu13/LMCocktail-10.7B-v1
---

# Umbra-v2.1-MoE-4x10.7
The [Umbra Series] is an offshoot of the [Lumosia Series], with the goal of being a general assistant that has a knack for storytelling and RP/ERP.
-What's New in v2.1?
Umbra v2.1 isn't just a simple update; it's like giving the model a double shot of espresso. I've changed the models and prompts in an attempt to make Umbra
not only your go-to assistant for general knowledge but also a great storyteller and RP/ERP companion.
-Longer Positive, Shorter Negative
In an effort to trick the gates into being less uptight, I've added more positive prompts and snappier negative ones.
These changes are based on the model's strengths and, frankly, my whimsical preferences.
-Experimental, As Always
Remember, folks, "v2.1" doesn't mean it's superior to its predecessors – it's just another step in the quest.
It's the 'Empire Strikes Back' of our series – could be better, could be worse, but definitely more dramatic.
-Base Context and Coherence
Umbra v2.1 has a base context of 8k scrolling window.
-The Tavern Card
Just for fun - the Umbra Personality Tavern Card. It's your gateway to immersive storytelling experiences,
a little like having a 'Choose Your Own Adventure' book, but way cooler because it's digital and doesn't get lost under your bed.
-Token Error? Fixed!
Umbra-v2 had a tokenizer error, but it was removed faster than you can say "Cops love Donuts".
So, give Umbra v2.1 a whirl and let me know how it goes. Your feedback is like the secret sauce in my development burger.
```
### System:
### USER:{prompt}
### Assistant:
```
Settings:
```
Temp: 1.0
min-p: 0.02-0.1
```
## Evals:
* Avg: 73.59
* ARC: 69.11
* HellaSwag: 87.57
* MMLU: 66.48
* T-QA: 66.75
* Winogrande: 83.11
* GSM8K: 68.69
## Examples:
```
posted soon
```
```
posted soon
```
## 🧩 Configuration
```
base_model: vicgalle/CarbonBeagle-11B
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: vicgalle/CarbonBeagle-11B
positive_prompts: [Revamped]
- source_model: Sao10K/Fimbulvetr-10.7B-v1
positive_prompts: [Revamped]
- source_model: bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
positive_prompts: [Revamped]
- source_model: Yhyu13/LMCocktail-10.7B-v1
positive_prompts: [Revamped]
```
```
Umbra-v2-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models:
* [vicgalle/CarbonBeagle-11B](https://huggingface.co/vicgalle/CarbonBeagle-11B)
* [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
* [bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED](https://huggingface.co/bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED)
* [Yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Steelskull/Umbra-v2-MoE-4x10.7"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
cutycat2000/InterDiffusion-2.5
|
cutycat2000
| 2024-01-25T18:30:02Z | 12 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"diffusion",
"free",
"freeai",
"ai",
"interdiffusion",
"text-to-image",
"dataset:ShoukanLabs/LAION-DallE-3-Local",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-25T00:45:32Z |
---
license: mit
library_name: diffusers
pipeline_tag: text-to-image
datasets:
- ShoukanLabs/LAION-DallE-3-Local
tags:
- diffusion
- free
- freeai
- ai
- interdiffusion
---
# InterDiffusion-2.5
- Size: 6.5 GB
- Text: Partly
- Prompt: 70 words - sentences
- Base: InterDiffusion-2
- Demo: [https://discord.gg/sCwc2vmbBy](https://discord.gg/sCwc2vmbBy)
## Examples
### Example 1
**Prompt:**
a vivid image of a surreal landscape where the sky is a blend of colors, and floating islands are surrounded by mist
**Result:**

### Example 2
**Prompt:**
imaginative scene featuring a futuristic cityscape with towering skyscrapers, flying cars, and neon lights illuminating the night
**Result:**

|
LoneStriker/Umbra-v2.1-MoE-4x10.7-2.4bpw-h6-exl2
|
LoneStriker
| 2024-01-25T18:23:35Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"mergekit",
"vicgalle/CarbonBeagle-11B",
"Sao10K/Fimbulvetr-10.7B-v1",
"bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED",
"Yhyu13/LMCocktail-10.7B-v1",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T18:13:00Z |
---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- vicgalle/CarbonBeagle-11B
- Sao10K/Fimbulvetr-10.7B-v1
- bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
- Yhyu13/LMCocktail-10.7B-v1
---

# Umbra-v2.1-MoE-4x10.7
The [Umbra Series] is an offshoot of the [Lumosia Series], with the goal of being a general assistant that has a knack for storytelling and RP/ERP.
-What's New in v2.1?
Umbra v2.1 isn't just a simple update; it's like giving the model a double shot of espresso. I've changed the models and prompts in an attempt to make Umbra
not only your go-to assistant for general knowledge but also a great storyteller and RP/ERP companion.
-Longer Positive, Shorter Negative
In an effort to trick the gates into being less uptight, I've added more positive prompts and snappier negative ones.
These changes are based on the model's strengths and, frankly, my whimsical preferences.
-Experimental, As Always
Remember, folks, "v2.1" doesn't mean it's superior to its predecessors – it's just another step in the quest.
It's the 'Empire Strikes Back' of our series – could be better, could be worse, but definitely more dramatic.
-Base Context and Coherence
Umbra v2.1 has a base context of 8k scrolling window.
-The Tavern Card
Just for fun - the Umbra Personality Tavern Card. It's your gateway to immersive storytelling experiences,
a little like having a 'Choose Your Own Adventure' book, but way cooler because it's digital and doesn't get lost under your bed.
-Token Error? Fixed!
Umbra-v2 had a tokenizer error, but it was removed faster than you can say "Cops love Donuts".
So, give Umbra v2.1 a whirl and let me know how it goes. Your feedback is like the secret sauce in my development burger.
```
### System:
### USER:{prompt}
### Assistant:
```
Settings:
```
Temp: 1.0
min-p: 0.02-0.1
```
## Evals:
* Avg: 73.59
* ARC: 69.11
* HellaSwag: 87.57
* MMLU: 66.48
* T-QA: 66.75
* Winogrande: 83.11
* GSM8K: 68.69
## Examples:
```
posted soon
```
```
posted soon
```
## 🧩 Configuration
```
base_model: vicgalle/CarbonBeagle-11B
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: vicgalle/CarbonBeagle-11B
positive_prompts: [Revamped]
- source_model: Sao10K/Fimbulvetr-10.7B-v1
positive_prompts: [Revamped]
- source_model: bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED
positive_prompts: [Revamped]
- source_model: Yhyu13/LMCocktail-10.7B-v1
positive_prompts: [Revamped]
```
```
Umbra-v2-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models:
* [vicgalle/CarbonBeagle-11B](https://huggingface.co/vicgalle/CarbonBeagle-11B)
* [Sao10K/Fimbulvetr-10.7B-v1](https://huggingface.co/Sao10K/Fimbulvetr-10.7B-v1)
* [bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED](https://huggingface.co/bn22/Nous-Hermes-2-SOLAR-10.7B-MISALIGNED)
* [Yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Steelskull/Umbra-v2-MoE-4x10.7"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
tanatapanun/fine-tuned-BioBART-50-epochs-1024-input-192-output
|
tanatapanun
| 2024-01-25T18:23:12Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:GanjinZero/biobart-base",
"base_model:finetune:GanjinZero/biobart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T17:11:13Z |
---
license: apache-2.0
base_model: GanjinZero/biobart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-50-epochs-1024-input-192-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBART-50-epochs-1024-input-192-output
This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co/GanjinZero/biobart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4193
- Rouge1: 0.1671
- Rouge2: 0.0236
- Rougel: 0.1346
- Rougelsum: 0.135
- Gen Len: 39.47
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 8.7921 | 0.0814 | 0.0009 | 0.0732 | 0.0736 | 37.05 |
| No log | 2.0 | 302 | 4.1000 | 0.0056 | 0.0014 | 0.0055 | 0.0054 | 4.67 |
| No log | 3.0 | 453 | 1.4452 | 0.1244 | 0.0299 | 0.1124 | 0.1108 | 21.03 |
| 5.9057 | 4.0 | 604 | 1.2534 | 0.1025 | 0.0216 | 0.0814 | 0.0812 | 23.24 |
| 5.9057 | 5.0 | 755 | 1.1841 | 0.1353 | 0.0266 | 0.1003 | 0.1 | 51.9 |
| 5.9057 | 6.0 | 906 | 1.1373 | 0.0884 | 0.0207 | 0.0708 | 0.0711 | 20.47 |
| 1.0961 | 7.0 | 1057 | 1.1075 | 0.1223 | 0.0317 | 0.0901 | 0.0905 | 33.73 |
| 1.0961 | 8.0 | 1208 | 1.1046 | 0.1597 | 0.036 | 0.1304 | 0.1309 | 32.43 |
| 1.0961 | 9.0 | 1359 | 1.0959 | 0.1452 | 0.0259 | 0.1183 | 0.1181 | 30.69 |
| 0.7053 | 10.0 | 1510 | 1.1090 | 0.1672 | 0.0374 | 0.1244 | 0.124 | 40.15 |
| 0.7053 | 11.0 | 1661 | 1.1060 | 0.1739 | 0.032 | 0.1392 | 0.1379 | 30.7 |
| 0.7053 | 12.0 | 1812 | 1.1295 | 0.1754 | 0.0341 | 0.1313 | 0.1296 | 48.33 |
| 0.7053 | 13.0 | 1963 | 1.1343 | 0.1911 | 0.0546 | 0.1616 | 0.1591 | 36.76 |
| 0.4358 | 14.0 | 2114 | 1.1602 | 0.1759 | 0.0355 | 0.1406 | 0.1408 | 31.15 |
| 0.4358 | 15.0 | 2265 | 1.1586 | 0.1957 | 0.045 | 0.1537 | 0.153 | 33.9 |
| 0.4358 | 16.0 | 2416 | 1.1901 | 0.1782 | 0.0474 | 0.1507 | 0.1498 | 35.08 |
| 0.2375 | 17.0 | 2567 | 1.2039 | 0.1868 | 0.0419 | 0.155 | 0.1537 | 31.81 |
| 0.2375 | 18.0 | 2718 | 1.2239 | 0.1882 | 0.0497 | 0.154 | 0.1518 | 34.44 |
| 0.2375 | 19.0 | 2869 | 1.2359 | 0.1684 | 0.0316 | 0.1386 | 0.138 | 34.03 |
| 0.1161 | 20.0 | 3020 | 1.2349 | 0.1773 | 0.0447 | 0.1482 | 0.1467 | 34.32 |
| 0.1161 | 21.0 | 3171 | 1.2541 | 0.1933 | 0.0442 | 0.1632 | 0.1621 | 35.39 |
| 0.1161 | 22.0 | 3322 | 1.2735 | 0.1809 | 0.0315 | 0.1492 | 0.1478 | 30.5 |
| 0.1161 | 23.0 | 3473 | 1.2864 | 0.1737 | 0.0311 | 0.1381 | 0.1377 | 41.44 |
| 0.0529 | 24.0 | 3624 | 1.2933 | 0.1748 | 0.0317 | 0.1417 | 0.1395 | 38.1 |
| 0.0529 | 25.0 | 3775 | 1.2963 | 0.1691 | 0.0346 | 0.1402 | 0.1395 | 38.66 |
| 0.0529 | 26.0 | 3926 | 1.3108 | 0.1754 | 0.0362 | 0.1435 | 0.1422 | 36.24 |
| 0.0272 | 27.0 | 4077 | 1.3213 | 0.1844 | 0.0369 | 0.1438 | 0.143 | 38.48 |
| 0.0272 | 28.0 | 4228 | 1.3274 | 0.1867 | 0.0263 | 0.1494 | 0.1492 | 48.51 |
| 0.0272 | 29.0 | 4379 | 1.3518 | 0.1799 | 0.0395 | 0.1466 | 0.146 | 37.97 |
| 0.0169 | 30.0 | 4530 | 1.3385 | 0.1693 | 0.0223 | 0.1408 | 0.1404 | 38.08 |
| 0.0169 | 31.0 | 4681 | 1.3528 | 0.1646 | 0.0419 | 0.1298 | 0.1304 | 39.36 |
| 0.0169 | 32.0 | 4832 | 1.3581 | 0.1618 | 0.0266 | 0.133 | 0.1324 | 39.97 |
| 0.0169 | 33.0 | 4983 | 1.3608 | 0.1686 | 0.0305 | 0.1359 | 0.1366 | 40.58 |
| 0.0112 | 34.0 | 5134 | 1.3631 | 0.1849 | 0.0361 | 0.1548 | 0.1526 | 39.22 |
| 0.0112 | 35.0 | 5285 | 1.3679 | 0.1773 | 0.0237 | 0.1419 | 0.1412 | 42.64 |
| 0.0112 | 36.0 | 5436 | 1.3674 | 0.1906 | 0.0319 | 0.1454 | 0.1447 | 42.92 |
| 0.009 | 37.0 | 5587 | 1.3893 | 0.169 | 0.0286 | 0.1364 | 0.1368 | 42.74 |
| 0.009 | 38.0 | 5738 | 1.3892 | 0.1617 | 0.0239 | 0.1265 | 0.1267 | 43.14 |
| 0.009 | 39.0 | 5889 | 1.3976 | 0.1378 | 0.02 | 0.1063 | 0.1074 | 36.67 |
| 0.0065 | 40.0 | 6040 | 1.4050 | 0.1699 | 0.0284 | 0.1344 | 0.1351 | 44.07 |
| 0.0065 | 41.0 | 6191 | 1.4019 | 0.1687 | 0.0296 | 0.1305 | 0.1311 | 40.0 |
| 0.0065 | 42.0 | 6342 | 1.4126 | 0.14 | 0.0202 | 0.1102 | 0.1115 | 36.65 |
| 0.0065 | 43.0 | 6493 | 1.4029 | 0.1716 | 0.0275 | 0.13 | 0.1308 | 43.95 |
| 0.0052 | 44.0 | 6644 | 1.4133 | 0.1784 | 0.0255 | 0.1431 | 0.1432 | 40.4 |
| 0.0052 | 45.0 | 6795 | 1.4173 | 0.1647 | 0.0226 | 0.1336 | 0.1332 | 40.33 |
| 0.0052 | 46.0 | 6946 | 1.4155 | 0.1743 | 0.0251 | 0.1407 | 0.141 | 38.74 |
| 0.0043 | 47.0 | 7097 | 1.4197 | 0.1659 | 0.0257 | 0.1325 | 0.1336 | 38.49 |
| 0.0043 | 48.0 | 7248 | 1.4213 | 0.1752 | 0.0265 | 0.1397 | 0.1401 | 40.02 |
| 0.0043 | 49.0 | 7399 | 1.4202 | 0.1703 | 0.0239 | 0.1377 | 0.1379 | 38.96 |
| 0.0039 | 50.0 | 7550 | 1.4193 | 0.1671 | 0.0236 | 0.1346 | 0.135 | 39.47 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
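## Example usage (sketch)
No usage code is given in the card; the following minimal sketch uses the `text2text-generation` pipeline and respects the 1024-token input / 192-token output budget implied by the model name — the input text is a placeholder.
```python
from transformers import pipeline

# load the fine-tuned BioBART checkpoint
generator = pipeline(
    "text2text-generation",
    model="tanatapanun/fine-tuned-BioBART-50-epochs-1024-input-192-output",
)

# placeholder clinical-style input; truncate long inputs to the 1024-token budget
text = "Patient admitted with chest pain. ECG showed no acute ischemic changes."
print(generator(text, max_length=192, truncation=True)[0]["generated_text"])
```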
|
maldred/Reinforce-pole
|
maldred
| 2024-01-25T18:16:14Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T18:16:06Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
anantg/Zephyr-7b-beta-merged
|
anantg
| 2024-01-25T18:08:16Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-25T18:06:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arun100/whisper-base-bn-chk-5
|
arun100
| 2024-01-25T18:06:26Z | 61 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"bn",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:arun100/whisper-base-bn",
"base_model:finetune:arun100/whisper-base-bn",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-25T17:31:00Z |
---
language:
- bn
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
base_model: arun100/whisper-base-bn
model-index:
- name: Whisper Base Bengali
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_16_0 bn
type: mozilla-foundation/common_voice_16_0
config: bn
split: test
args: bn
metrics:
- type: wer
value: 35.60262364321316
name: Wer
- type: wer
value: 29.87
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Bengali
This model is a fine-tuned version of [arun100/whisper-base-bn](https://huggingface.co/arun100/whisper-base-bn) on the mozilla-foundation/common_voice_16_0 bn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2671
- Wer: 35.6026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2423 | 1.72 | 500 | 0.2710 | 35.9570 |
| 0.2329 | 3.43 | 1000 | 0.2671 | 35.6026 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
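## Example usage (sketch)
A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the audio path is a placeholder and the chunking setting is illustrative):
```python
from transformers import pipeline

# load the fine-tuned Whisper Base checkpoint for Bengali
asr = pipeline(
    "automatic-speech-recognition",
    model="arun100/whisper-base-bn-chk-5",
)

# placeholder path to a Bengali recording; chunking helps with audio longer than 30 s
result = asr("bengali_sample.wav", chunk_length_s=30)
print(result["text"])
```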
|
milaidy/bootoshi
|
milaidy
| 2024-01-25T17:54:19Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-25T17:48:20Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bootoshi Dreambooth model trained by milaidy with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
TromeroResearch/SciMistral-V1
|
TromeroResearch
| 2024-01-25T17:54:05Z | 8 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:arxiv_dataset",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T13:28:49Z |
---
library_name: transformers
license: mit
datasets:
- arxiv_dataset
language:
- en
pipeline_tag: text-generation
---
# Model Card for SciMistral-V1
The SciMistral-V1 Large Language Model (LLM) is an improved fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
This model was fine-tuned using the [arxiv-dataset](https://www.kaggle.com/datasets/Cornell-University/arxiv), in particular, using abstracts from a variety of scientific papers.
For our article explaining more on how we did this, please check out our [website](https://www.tromero.ai/articles)!
## How to Get Started with the Model
To run this model for yourself:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# move the model to the GPU so it matches the CUDA input tensors below
model = AutoModelForCausalLM.from_pretrained("TromeroResearch/SciMistral-V1").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("TromeroResearch/SciMistral-V1")
prompt = "This paper seeks to disprove that 1+1=2"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
# enable sampling so temperature/top_k/top_p actually take effect
output = model.generate(input_ids, max_length=150, do_sample=True, num_return_sequences=1, repetition_penalty=1.2, top_k=50, top_p=0.95, temperature=1.0)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Sample results
Regular [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-v0.1):
Abstract start:
```markdown
This paper seeks to disprove that 1+1=2
```
Generated completion:
```markdown
The first thing we need to do is define what a number is, and how it relates to the world around us. A number can be defined as an abstract concept used for counting or measuring things in our environment. It’s important to note here that numbers are not physical objects; they exist only within our minds…
```
And so on… A pretty weak, unscientific and generic response, something you’d expect from someone pretending to know what they’re talking about. But with our model:
[SciMistral](https://huggingface.co/Tromero/SciMistral-V1):
Abstract start:
```markdown
This paper seeks to disprove that 1+1=2
```
Generated completion:
```markdown
We do this by showing that the equation is not true in all models of arithmetic, and hence it cannot be proved from Peano Arithmetic (PA). The proof uses a technique called "forcing" which was introduced into mathematical logic by Cohen in his famous work on forcing for set theory. We show how to use forcing to construct a model of PA where 1+1 does not equal 2…
```
And it continues. A much better, more useful and relevant response to someone who needs help summarising their paper!
## Hardware
4 x Nvidia A6000 GPUs
## Limitations
The SciMistral model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
|
sheduele/distil_task_C
|
sheduele
| 2024-01-25T17:44:40Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-11-28T11:50:46Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distil_task_C
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil_task_C
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 462.2090
- Mae: 10.1956
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 18
- eval_batch_size: 18
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 203 | 7191.6465 | 55.0685 |
| No log | 2.0 | 406 | 5555.1792 | 43.4994 |
| 7733.7615 | 3.0 | 609 | 4179.3813 | 35.0103 |
| 7733.7615 | 4.0 | 812 | 3130.1318 | 27.5621 |
| 4115.3178 | 5.0 | 1015 | 2453.7505 | 23.4578 |
| 4115.3178 | 6.0 | 1218 | 1842.0889 | 19.4926 |
| 4115.3178 | 7.0 | 1421 | 1518.6580 | 17.3858 |
| 2159.565 | 8.0 | 1624 | 1207.9269 | 14.8793 |
| 2159.565 | 9.0 | 1827 | 1017.0433 | 13.6738 |
| 1328.2381 | 10.0 | 2030 | 1088.5262 | 15.4227 |
| 1328.2381 | 11.0 | 2233 | 758.3786 | 11.8081 |
| 1328.2381 | 12.0 | 2436 | 789.0392 | 12.4503 |
| 902.9826 | 13.0 | 2639 | 640.3048 | 11.4809 |
| 902.9826 | 14.0 | 2842 | 568.1979 | 10.7808 |
| 702.5015 | 15.0 | 3045 | 532.1036 | 10.4821 |
| 702.5015 | 16.0 | 3248 | 505.8835 | 10.4420 |
| 702.5015 | 17.0 | 3451 | 511.9482 | 10.7153 |
| 595.909 | 18.0 | 3654 | 503.0000 | 10.6231 |
| 595.909 | 19.0 | 3857 | 483.5316 | 10.4885 |
| 485.5158 | 20.0 | 4060 | 462.2090 | 10.1956 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
hugomssantos/roberta-base-kaggle
|
hugomssantos
| 2024-01-25T17:40:53Z | 47 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"token-classification",
"generated_from_keras_callback",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-22T18:42:47Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_keras_callback
model-index:
- name: hugomssantos/roberta-base-kaggle
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hugomssantos/roberta-base-kaggle
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0158
- Validation Loss: 0.0106
- Train Precision: 0.9132
- Train Recall: 0.9612
- Train F1: 0.9366
- Train Accuracy: 0.9972
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 414, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.2683 | 0.0314 | 0.8 | 0.7978 | 0.7989 | 0.9896 | 0 |
| 0.0287 | 0.0132 | 0.9105 | 0.9584 | 0.9339 | 0.9959 | 1 |
| 0.0158 | 0.0106 | 0.9132 | 0.9612 | 0.9366 | 0.9972 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
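## Example usage (sketch)
The checkpoint was trained with Keras/TensorFlow, so a minimal inference sketch would load it with the TF classes; the example sentence is illustrative and the entity labels depend on the Kaggle task this was fine-tuned for.
```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("hugomssantos/roberta-base-kaggle")
model = TFAutoModelForTokenClassification.from_pretrained("hugomssantos/roberta-base-kaggle")

# aggregate sub-word predictions into whole-entity spans
tagger = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(tagger("An illustrative sentence to tag with the fine-tuned model."))
```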
|
asun17904/imdb-gpt2-a2b2
|
asun17904
| 2024-01-25T17:38:04Z | 1 | 0 |
pytorch
|
[
"pytorch",
"gpt2",
"en",
"license:mit",
"region:us"
] | null | 2024-01-25T01:12:16Z |
---
language: en
license: mit
library_name: pytorch
---
# Knowledge Continuity Regularized Network
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 16
- `gradient_accumulation_steps` = 1
- `weight_decay` = 1e-09
- `seed` = 42
Regularization Hyperparameters:
- `numerical stability denominator constant` = 0.01
- `lambda` = 0.01
- `alpha` = 2.0
- `beta` = 2.0
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|7.057|0.872|1.0|
|6.308|0.919|2.0|
|6.182|0.927|3.0|
|6.121|0.930|4.0|
|6.134|0.930|5.0|
|6.113|0.931|6.0|
|6.074|0.934|7.0|
|6.085|0.933|8.0|
|6.101|0.932|9.0|
|6.024|0.937|10.0|
|6.173|0.927|11.0|
|6.043|0.936|12.0|
|6.049|0.935|13.0|
|6.329|0.918|14.0|
|6.098|0.932|15.0|
|6.040|0.936|16.0|
|6.024|0.937|17.0|
|6.115|0.931|18.0|
|6.112|0.932|19.0|
|6.019|0.937|20.0|
|6.019|0.937|21.0|
|5.997|0.939|22.0|
|6.016|0.938|23.0|
|6.026|0.937|24.0|
|5.959|0.941|25.0|
|5.993|0.939|26.0|
|6.022|0.937|27.0|
|6.003|0.939|28.0|
|5.991|0.939|29.0|
|6.105|0.932|30.0|
|5.967|0.940|31.0|
|5.983|0.940|32.0|
|5.997|0.939|33.0|
|6.007|0.938|34.0|
|5.971|0.940|35.0|
|5.988|0.939|36.0|
|5.957|0.941|37.0|
|5.965|0.941|38.0|
|5.957|0.941|39.0|
|5.977|0.940|40.0|
|5.985|0.940|41.0|
|5.967|0.940|42.0|
|5.992|0.939|43.0|
|5.949|0.941|44.0|
|5.961|0.941|45.0|
|5.968|0.941|46.0|
|5.964|0.941|47.0|
|5.956|0.941|48.0|
|5.963|0.941|49.0|
|
Bsbell21/LLMPromptGen-AI
|
Bsbell21
| 2024-01-25T17:36:26Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-25T17:30:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
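Since no snippet is provided, here is a minimal sketch based on the repository tags (`mixtral`, `text-generation`, 4-bit `bitsandbytes`); the prompt format is an assumption, and `bitsandbytes`/`accelerate` must be installed for the quantized weights.
```python
# Minimal sketch; assumes the repo ships a 4-bit quantized Mixtral causal LM.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Bsbell21/LLMPromptGen-AI"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a detailed image-generation prompt about a rainy cyberpunk street."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```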
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Devdeshitha/Mistral_testing
|
Devdeshitha
| 2024-01-25T17:31:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T17:31:21Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
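No snippet is given; since the repository is tagged `unsloth`, one plausible way to load it is through Unsloth's `FastLanguageModel`. This is a sketch under the assumption that the upload is a full causal-LM checkpoint rather than a bare adapter.
```python
# Hedged sketch: assumes a full causal-LM checkpoint compatible with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Devdeshitha/Mistral_testing",
    max_seq_length=2048,
    load_in_4bit=True,   # assumption; drop if the weights are full precision
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```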
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
spidersouris/genre-t5-small
|
spidersouris
| 2024-01-25T17:25:03Z | 130 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T17:13:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: t5-small
model-index:
- name: results_t5small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_t5small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0235 | 0.12 | 500 | 0.0144 |
| 0.0151 | 0.25 | 1000 | 0.0119 |
| 0.0153 | 0.37 | 1500 | 0.0101 |
| 0.0173 | 0.5 | 2000 | 0.0109 |
| 0.0132 | 0.62 | 2500 | 0.0100 |
| 0.0123 | 0.75 | 3000 | 0.0092 |
| 0.0136 | 0.87 | 3500 | 0.0110 |
| 0.014 | 0.99 | 4000 | 0.0089 |
| 0.0123 | 1.12 | 4500 | 0.0094 |
| 0.0155 | 1.24 | 5000 | 0.0088 |
| 0.0119 | 1.37 | 5500 | 0.0095 |
| 0.0126 | 1.49 | 6000 | 0.0089 |
| 0.0111 | 1.61 | 6500 | 0.0083 |
| 0.0129 | 1.74 | 7000 | 0.0081 |
| 0.0115 | 1.86 | 7500 | 0.0087 |
| 0.0133 | 1.99 | 8000 | 0.0085 |
| 0.0167 | 2.11 | 8500 | 0.0105 |
| 0.0113 | 2.24 | 9000 | 0.0088 |
| 0.0106 | 2.36 | 9500 | 0.0077 |
| 0.0101 | 2.48 | 10000 | 0.0075 |
| 0.0086 | 2.61 | 10500 | 0.0076 |
| 0.0112 | 2.73 | 11000 | 0.0075 |
| 0.0097 | 2.86 | 11500 | 0.0076 |
| 0.0093 | 2.98 | 12000 | 0.0078 |
| 0.0086 | 3.1 | 12500 | 0.0076 |
| 0.0092 | 3.23 | 13000 | 0.0077 |
| 0.0089 | 3.35 | 13500 | 0.0074 |
| 0.0086 | 3.48 | 14000 | 0.0072 |
| 0.0084 | 3.6 | 14500 | 0.0072 |
| 0.0083 | 3.73 | 15000 | 0.0073 |
| 0.0076 | 3.85 | 15500 | 0.0072 |
| 0.0097 | 3.97 | 16000 | 0.0071 |
| 0.0074 | 4.1 | 16500 | 0.0071 |
| 0.0074 | 4.22 | 17000 | 0.0070 |
| 0.0089 | 4.35 | 17500 | 0.0069 |
| 0.0072 | 4.47 | 18000 | 0.0069 |
| 0.0072 | 4.6 | 18500 | 0.0068 |
| 0.0084 | 4.72 | 19000 | 0.0067 |
| 0.0076 | 4.84 | 19500 | 0.0072 |
| 0.0084 | 4.97 | 20000 | 0.0067 |
| 0.0062 | 5.09 | 20500 | 0.0065 |
| 0.007 | 5.22 | 21000 | 0.0064 |
| 0.0075 | 5.34 | 21500 | 0.0063 |
| 0.0067 | 5.46 | 22000 | 0.0064 |
| 0.0063 | 5.59 | 22500 | 0.0065 |
| 0.007 | 5.71 | 23000 | 0.0062 |
| 0.0064 | 5.84 | 23500 | 0.0062 |
| 0.0073 | 5.96 | 24000 | 0.0062 |
| 0.0064 | 6.09 | 24500 | 0.0063 |
| 0.0059 | 6.21 | 25000 | 0.0060 |
| 0.0062 | 6.33 | 25500 | 0.0060 |
| 0.0057 | 6.46 | 26000 | 0.0060 |
| 0.006 | 6.58 | 26500 | 0.0058 |
| 0.0053 | 6.71 | 27000 | 0.0058 |
| 0.0064 | 6.83 | 27500 | 0.0057 |
| 0.0061 | 6.95 | 28000 | 0.0057 |
| 0.0055 | 7.08 | 28500 | 0.0057 |
| 0.0065 | 7.2 | 29000 | 0.0057 |
| 0.0058 | 7.33 | 29500 | 0.0057 |
| 0.0052 | 7.45 | 30000 | 0.0055 |
| 0.0058 | 7.58 | 30500 | 0.0056 |
| 0.0055 | 7.7 | 31000 | 0.0055 |
| 0.0049 | 7.82 | 31500 | 0.0054 |
| 0.0053 | 7.95 | 32000 | 0.0053 |
| 0.005 | 8.07 | 32500 | 0.0054 |
| 0.0051 | 8.2 | 33000 | 0.0053 |
| 0.0047 | 8.32 | 33500 | 0.0053 |
| 0.0057 | 8.45 | 34000 | 0.0052 |
| 0.0049 | 8.57 | 34500 | 0.0053 |
| 0.0051 | 8.69 | 35000 | 0.0052 |
| 0.0047 | 8.82 | 35500 | 0.0052 |
| 0.0051 | 8.94 | 36000 | 0.0051 |
| 0.0044 | 9.07 | 36500 | 0.0052 |
| 0.0042 | 9.19 | 37000 | 0.0052 |
| 0.0045 | 9.31 | 37500 | 0.0051 |
| 0.0043 | 9.44 | 38000 | 0.0051 |
| 0.0041 | 9.56 | 38500 | 0.0051 |
| 0.0042 | 9.69 | 39000 | 0.0051 |
| 0.004 | 9.81 | 39500 | 0.0050 |
| 0.0038 | 9.94 | 40000 | 0.0050 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.1
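The card gives no inference example; here is a minimal sketch using the text2text pipeline. The input text is illustrative, and any task-specific prefix the model expects is not documented.
```python
# Minimal sketch for a T5-style text2text checkpoint; the input format is assumed.
from transformers import pipeline

generator = pipeline("text2text-generation", model="spidersouris/genre-t5-small")
print(generator("Example input sentence.", max_new_tokens=64))
```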
|
GedByrne/ppo-SnowballTarget
|
GedByrne
| 2024-01-25T17:18:40Z | 1 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-01-25T17:18:35Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
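The checkpoint can also be fetched programmatically before resuming training or watching it; a minimal sketch using `huggingface_hub` (the local directory name is arbitrary):
```python
# Download the trained SnowballTarget checkpoint from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="GedByrne/ppo-SnowballTarget",
    local_dir="./downloads/ppo-SnowballTarget",  # arbitrary target directory
)
print(f"Model files downloaded to: {local_dir}")
```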
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: GedByrne/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
MoulikBansal/fine-tuned-on-mcq-phi_1_5_new_version_2
|
MoulikBansal
| 2024-01-25T17:17:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T17:17:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
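Absent an official snippet, here is a minimal sketch via the text-generation pipeline, assuming the upload is a complete phi-1.5-style causal LM. The repository name suggests a multiple-choice fine-tune, so the prompt format below is purely illustrative.
```python
# Hedged sketch; the checkpoint layout and expected prompt format are assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="MoulikBansal/fine-tuned-on-mcq-phi_1_5_new_version_2",
)
question = (
    "Question: What is the capital of France?\n"
    "A) Berlin  B) Paris  C) Madrid  D) Rome\n"
    "Answer:"
)
print(generator(question, max_new_tokens=16)[0]["generated_text"])
```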
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wimai/rapgenerator
|
wimai
| 2024-01-25T17:11:43Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"text2text-generation",
"arxiv:1910.09700",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"region:us"
] |
text2text-generation
| 2024-01-25T17:07:31Z |
---
library_name: peft
base_model: google/flan-t5-base
pipeline_tag: text2text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
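The front matter identifies this repository as a PEFT adapter on top of `google/flan-t5-base`, so one plausible loading path is the sketch below; the prompt is illustrative.
```python
# Load the base flan-t5 model and attach the PEFT adapter from this repo.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, "wimai/rapgenerator")

inputs = tokenizer("Write a rap verse about late-night coding.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```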
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Athuin/Llama-7b-med-GGUF
|
Athuin
| 2024-01-25T17:05:11Z | 1 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"en",
"base_model:unsloth/llama-2-7b",
"base_model:quantized:unsloth/llama-2-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T16:48:15Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-2-7b
---
# Uploaded model
- **Developed by:** Athuin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-7b
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
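Because the upload is in GGUF format, it will not load with plain `transformers`; a minimal sketch with `llama-cpp-python` instead. The GGUF filename below is a placeholder, so check the repository file list for the actual name.
```python
# Hedged sketch: run the GGUF file with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: the filename is a placeholder; use the actual *.gguf file in the repo.
gguf_path = hf_hub_download(repo_id="Athuin/Llama-7b-med-GGUF", filename="model.gguf")

llm = Llama(model_path=gguf_path, n_ctx=2048)
result = llm("What are common symptoms of iron deficiency?", max_tokens=128)
print(result["choices"][0]["text"])
```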
|
SharonTudi/CONTEXT_two
|
SharonTudi
| 2024-01-25T17:04:53Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T17:04:39Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: CONTEXT_two
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CONTEXT_two
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9806
- Precision: 0.6800
- Recall: 0.6711
- F1: 0.6715
- Accuracy: 0.6711
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.3915 | 0.31 | 15 | 1.3670 | 0.4926 | 0.4211 | 0.3467 | 0.4211 |
| 1.3405 | 0.62 | 30 | 1.3368 | 0.6504 | 0.4737 | 0.4176 | 0.4737 |
| 1.3104 | 0.94 | 45 | 1.2744 | 0.6319 | 0.5526 | 0.5575 | 0.5526 |
| 1.2038 | 1.25 | 60 | 1.1973 | 0.6648 | 0.6053 | 0.6130 | 0.6053 |
| 1.1289 | 1.56 | 75 | 1.1413 | 0.6826 | 0.6842 | 0.6827 | 0.6842 |
| 1.0146 | 1.88 | 90 | 1.0708 | 0.6920 | 0.6316 | 0.6313 | 0.6316 |
| 0.9391 | 2.19 | 105 | 1.0283 | 0.6488 | 0.6184 | 0.6174 | 0.6184 |
| 0.9295 | 2.5 | 120 | 0.9971 | 0.7229 | 0.6974 | 0.7025 | 0.6974 |
| 0.8569 | 2.81 | 135 | 0.9806 | 0.6800 | 0.6711 | 0.6715 | 0.6711 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
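The card omits usage code; here is a minimal sketch with the text-classification pipeline. The class labels are not documented, so the output labels come straight from the model config.
```python
# Minimal sketch for the fine-tuned DistilBERT classifier.
from transformers import pipeline

classifier = pipeline("text-classification", model="SharonTudi/CONTEXT_two")
print(classifier("Example utterance to classify."))
```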
|
demonsu/orion-14b-longchat-gguf
|
demonsu
| 2024-01-25T17:01:52Z | 7 | 0 | null |
[
"gguf",
"region:us"
] | null | 2024-01-25T11:01:20Z |
Original model: https://huggingface.co/OrionStarAI/Orion-14B-LongChat
|