| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
OpenPipe/mistral-ft-optimized-1227
|
OpenPipe
| 2024-01-24T01:58:45Z | 5,281 | 82 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:finetune:Intel/neural-chat-7b-v3-3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-27T14:51:48Z |
---
base_model:
- teknium/OpenHermes-2.5-Mistral-7B
- Intel/neural-chat-7b-v3-3
- meta-math/MetaMath-Mistral-7B
- openchat/openchat-3.5-1210
license: apache-2.0
---
This model is intended to be a strong base suitable for downstream fine-tuning on a variety of tasks. Based on our internal evaluations, we believe it's one of the strongest models for most downstream tasks. You can read more about our development and evaluation process [here](https://openpipe.ai/blog/mistral-7b-fine-tune-optimized).
It is a hierarchical SLERP merge of teknium/OpenHermes-2.5-Mistral-7B, Intel/neural-chat-7b-v3-3, meta-math/MetaMath-Mistral-7B, and openchat/openchat-3.5-1210. berkeley-nest/Starling-LM-7B-alpha was omitted from this version of the model.
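A minimal sketch of loading this checkpoint with the standard `transformers` API as a starting point for inference or downstream fine-tuning; the precision, device placement, and example prompt below are illustrative choices, not part of this repo.

```python
# Hedged sketch: load the merged checkpoint and run a quick generation check.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "OpenPipe/mistral-ft-optimized-1227"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # half precision assumed to fit a single consumer GPU
    device_map="auto",
)

inputs = tokenizer("The quickest way to learn a new language is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```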
|
chandc/distilbert-base-uncased-finetuned-cola
|
chandc
| 2024-01-24T01:50:39Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T01:29:25Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8372
- Matthews Correlation: 0.5189
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
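As a sketch, the hyperparameters listed above map roughly onto the following `TrainingArguments`; anything not stated in the card (output directory, evaluation strategy) is an assumption.

```python
# Hedged sketch: the listed hyperparameters expressed as transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch results below
)
```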
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5226 | 1.0 | 535 | 0.4682 | 0.4576 |
| 0.3517 | 2.0 | 1070 | 0.5285 | 0.4869 |
| 0.2269 | 3.0 | 1605 | 0.6517 | 0.4967 |
| 0.1842 | 4.0 | 2140 | 0.7302 | 0.5163 |
| 0.1347 | 5.0 | 2675 | 0.8372 | 0.5189 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
atsstagram/distilbert-base-uncased-finetuned-emotion-imbalanced-1000plus3000
|
atsstagram
| 2024-01-24T01:46:45Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-22T22:32:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-imbalanced-1000plus3000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-imbalanced-1000plus3000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1446
- Accuracy: 0.5835
- F1: 0.4990
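A minimal inference sketch using the `text-classification` pipeline; the example sentence is illustrative and the label names depend on the mapping shipped with this checkpoint.

```python
# Hedged sketch: querying the fine-tuned emotion classifier with the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="atsstagram/distilbert-base-uncased-finetuned-emotion-imbalanced-1000plus3000",
)
print(classifier("I can't believe how well this turned out!"))  # e.g. [{'label': ..., 'score': ...}]
```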
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.4611 | 1.0 | 63 | 1.2669 | 0.532 | 0.4233 |
| 1.1433 | 2.0 | 126 | 1.1446 | 0.5835 | 0.4990 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.0
|
KimByeongSu/gpt-neo-125m-lama-finetuning
|
KimByeongSu
| 2024-01-24T01:44:47Z | 103 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-125m",
"base_model:finetune:EleutherAI/gpt-neo-125m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T01:34:06Z |
---
license: mit
base_model: EleutherAI/gpt-neo-125m
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-125m-lama-finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-125m-lama-finetuning
This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3933
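As a quick sanity check, a causal LM's evaluation perplexity is simply the exponential of the reported cross-entropy loss:

```python
# Perplexity implied by the final validation loss of 3.3933.
import math

eval_loss = 3.3933
print(math.exp(eval_loss))  # ≈ 29.8
```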
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 386 | 3.4344 |
| 3.4698 | 2.0 | 772 | 3.4000 |
| 3.2617 | 3.0 | 1158 | 3.3933 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.13.1+cu117
- Datasets 2.14.6
- Tokenizers 0.15.0
|
ifuseok/ft-solar-10.7b-v2.1-dpo
|
ifuseok
| 2024-01-24T01:41:14Z | 2,283 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:nlpai-lab/databricks-dolly-15k-ko",
"dataset:kyujinpy/KOR-OpenOrca-Platypus-v3",
"dataset:KETI-AIR/kor_boolq",
"dataset:heegyu/open-korean-instructions",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T16:09:54Z |
---
language:
- ko
pipeline_tag: text-generation
datasets:
- nlpai-lab/databricks-dolly-15k-ko
- kyujinpy/KOR-OpenOrca-Platypus-v3
- KETI-AIR/kor_boolq
- heegyu/open-korean-instructions
license: cc-by-nc-sa-4.0
---
**Input** Models input text only.
**Output** Models generate text only.
**Base Model** [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated)
**Training Dataset**
- [nlpai-lab/databricks-dolly-15k-ko](https://huggingface.co/datasets/nlpai-lab/databricks-dolly-15k-ko)
- [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
- [heegyu/open-korean-instructions](https://huggingface.co/datasets/heegyu/open-korean-instructions)
- [KETI-AIR/kor_boolq](https://huggingface.co/datasets/KETI-AIR/kor_boolq)
- [A portion of AIHub translation data](https://aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71593)
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "ifuseok/sft-solar-10.7b-v2.1-dpo"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
# Prompt Example
```
### System:
This is the system message.
### User:
This is the user message.
### Assistant
This is the assistant's response.
```
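A sketch of combining the implementation code with the prompt format shown above; the user message and sampling settings are illustrative, not an official example from this repo.

```python
# Hedged sketch: generate with the prompt format above, reusing the OpenOrca /
# OpenOrca_tokenizer objects created in the Implementation Code section.
prompt = """### System:
You are a helpful assistant.
### User:
Briefly introduce yourself.
### Assistant
"""

inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
outputs = OpenOrca.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(OpenOrca_tokenizer.decode(outputs[0], skip_special_tokens=True))
```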
|
ntc-ai/SDXL-LoRA-slider.back-to-the-future-film-still
|
ntc-ai
| 2024-01-24T01:26:09Z | 23 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-24T01:26:06Z |
---
language:
- en
thumbnail: "images/evaluate/back to the future film still.../back to the future film still_17_3.0.png"
widget:
- text: back to the future film still
output:
url: images/back to the future film still_17_3.0.png
- text: back to the future film still
output:
url: images/back to the future film still_19_3.0.png
- text: back to the future film still
output:
url: images/back to the future film still_20_3.0.png
- text: back to the future film still
output:
url: images/back to the future film still_21_3.0.png
- text: back to the future film still
output:
url: images/back to the future film still_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "back to the future film still"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - back to the future film still (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/back to the future film still_17_-3.0.png" width=256 height=256 /> | <img src="images/back to the future film still_17_0.0.png" width=256 height=256 /> | <img src="images/back to the future film still_17_3.0.png" width=256 height=256 /> |
| <img src="images/back to the future film still_19_-3.0.png" width=256 height=256 /> | <img src="images/back to the future film still_19_0.0.png" width=256 height=256 /> | <img src="images/back to the future film still_19_3.0.png" width=256 height=256 /> |
| <img src="images/back to the future film still_20_-3.0.png" width=256 height=256 /> | <img src="images/back to the future film still_20_0.0.png" width=256 height=256 /> | <img src="images/back to the future film still_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
back to the future film still
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.back-to-the-future-film-still', weight_name='back to the future film still.safetensors', adapter_name="back to the future film still")
# Activate the LoRA
pipe.set_adapters(["back to the future film still"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, back to the future film still"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
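Since this is a slider LoRA, the adapter weight plays the role of the strength shown in the grid above. A sketch of sweeping it, reusing the objects from the snippet above; the output file names are illustrative.

```python
# Hedged sketch: reproduce the strength sweep by varying the adapter weight.
for strength in (-3.0, 0.0, 3.0):
    pipe.set_adapters(["back to the future film still"], adapter_weights=[strength])
    image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height,
                 guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
    image.save(f"result_strength_{strength}.png")
```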
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of more than 1,140 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
aayushg1713/test
|
aayushg1713
| 2024-01-24T01:17:05Z | 0 | 0 | null |
[
"en",
"dataset:OpenAssistant/oasst2",
"region:us"
] | null | 2024-01-24T01:16:12Z |
---
datasets:
- OpenAssistant/oasst2
language:
- en
metrics:
- bertscore
- accuracy
- code_eval
---
|
Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF
|
Kquant03
| 2024-01-24T01:07:49Z | 4 | 0 | null |
[
"gguf",
"merge",
"moe",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-20T11:31:54Z |
---
license: apache-2.0
language:
- en
tags:
- merge
- moe
thumbnail: ""
---

# Theoretically unstoppable. (Evals prove otherwise :/ )
A Convex frankenMoE, created by improving the original Seraphim script. The models implemented are as follows:
- [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) - base
- [PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0](https://huggingface.co/PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0) - expert #1
- [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) - expert #2
- [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) - expert #3
- [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) - expert #4
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) - expert #5
- [kodonho/SolarM-SakuraSolar-SLERP](https://huggingface.co/kodonho/SolarM-SakuraSolar-SLERP) - expert #6
- [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) - expert #7
- [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) - expert #8
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [Q2_K Tiny](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 23.4 GB| 25.4 GB | smallest, significant quality loss - not recommended for most purposes |
| [Q3_K_M](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 30.5 GB| 32.5 GB | very small, high quality loss |
| [Q4_0](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 39.6 GB| 41.6 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Q4_K_M](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | ~39.6 GB| ~41.6 GB | medium, balanced quality - recommended |
| [Q5_0](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 48.2 GB| 50.2 GB | legacy; large, balanced quality |
| [Q5_K_M](https://huggingface.co/Kquant03/BurningBruce-SOLAR-8x10.7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | ~48.2 GB| ~50.2 GB | large, balanced quality - recommended |
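A minimal sketch of running one of these quants locally with `llama-cpp-python`; the file name follows the table above, and the context size and GPU-offload settings are assumptions.

```python
# Hedged sketch: load the Q4_K_M quant and run a short completion.
from llama_cpp import Llama

llm = Llama(
    model_path="ggml-model-q4_k_m.gguf",  # downloaded from this repo (see table above)
    n_ctx=4096,        # assumption
    n_gpu_layers=-1,   # offload everything if VRAM allows; set 0 for CPU-only
)
out = llm("Q: What is a mixture of experts?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```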
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of "experts" (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token "More" is sent to the second expert, and the token "Parameters" is sent to the first network. As we'll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the "experts") to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they've historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we'll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That's because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
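To make the routing description concrete, here is a toy top-2 gating sketch over token representations; the dimensions and layer shapes are illustrative only and are not this repo's code.

```python
# Hedged sketch: top-2 token routing with a learned gate (toy dimensions).
import torch
import torch.nn.functional as F

n_experts, d_model = 8, 16
tokens = torch.randn(10, d_model)           # 10 token representations
gate = torch.nn.Linear(d_model, n_experts)  # learned router

logits = gate(tokens)                       # (10, n_experts)
probs = F.softmax(logits, dim=-1)
weights, chosen = probs.topk(2, dim=-1)     # top-2 experts per token
weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize the two gate values

experts = torch.nn.ModuleList(
    torch.nn.Sequential(torch.nn.Linear(d_model, 4 * d_model),
                        torch.nn.GELU(),
                        torch.nn.Linear(4 * d_model, d_model))
    for _ in range(n_experts)
)

out = torch.zeros_like(tokens)
for i in range(tokens.shape[0]):
    for k in range(2):                      # combine the two chosen experts additively
        e = int(chosen[i, k])
        out[i] += weights[i, k] * experts[e](tokens[i])
```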
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously.
|
traethethird/ppo-LunarLander-v2
|
traethethird
| 2024-01-24T01:07:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-24T01:07:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.18 +/- 14.66
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
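One way this kind of checkpoint is usually loaded, filling in the TODO above; the zip file name assumes the default SB3/hub naming and is a guess, as are the evaluation settings.

```python
# Hedged sketch: download the checkpoint and evaluate it (file name is an assumption).
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="traethethird/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumption: default naming
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```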
|
Fynd/cyclops_llamav2_13b_2_ep_intent
|
Fynd
| 2024-01-24T01:05:58Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"region:us"
] | null | 2024-01-24T01:05:50Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
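As a sketch, the flags above correspond to the following `transformers` quantization config; the base model this adapter attaches to is not stated in the card, so only the config itself is shown.

```python
# Hedged sketch: the listed bitsandbytes flags expressed as a BitsAndBytesConfig.
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
# The PEFT adapter in this repo would then be attached to a base model loaded with
# quantization_config=bnb_config (the base model id is not given in the card).
```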
### Framework versions
- PEFT 0.5.0
|
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.2-dpo
|
AIFT
| 2024-01-24T01:04:16Z | 61 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T00:18:38Z |
---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct-dpo model v1.2</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy after partially removing (sampling) and cleaning it.
We then reviewed that data to extract related tasks and, based on them, built our own training data from open-source NLP resources tailored to those tasks:
history, science, math, machine reading comprehension, and review-analysis problems were constructed with GPT,
and additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data format.
Following the AI2AI Challenge data format, about 500 elementary-level science and math problems were created with GPT.
English translation data (EN-KO / KO-EN) was also used as training data.
In total, about 40,000 examples were used.
<br>
<DPO training data>
The DPO data focused on CommonGen and TruthfulQA; roughly 17,000 examples were used for training.
<br>
+ Additional TruthfulQA-related problems were added (true/false questions about profanity).
+ Machine reading comprehension training data was built by obtaining answers through ChatGPT.
+ Grammar-related training data
<br>
### The training data files are not publicly released.
<br>
<b><Training></b>
Training was performed using LoRA on two A100 40G GPUs.
|
jsfs11/WestOrcaNeuralMarco-DPO-v2-DARETIES-7B-GGUF
|
jsfs11
| 2024-01-24T01:02:39Z | 2 | 1 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"senseable/Westlake-7B-v2",
"decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"mlabonne/NeuralMarcoro14-7B",
"base_model:decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"base_model:merge:decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP",
"base_model:mlabonne/NeuralMarcoro14-7B",
"base_model:merge:mlabonne/NeuralMarcoro14-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T06:24:03Z |
---
tags:
- merge
- mergekit
- lazymergekit
- senseable/Westlake-7B-v2
- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
- mlabonne/NeuralMarcoro14-7B
base_model:
- senseable/Westlake-7B-v2
- decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
- mlabonne/NeuralMarcoro14-7B
license: apache-2.0
---
# WestOrcaNeuralMarco-DPO-v2-DARETIES-7B
WestOrcaNeuralMarco-DPO-v2-DARETIES-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [senseable/Westlake-7B-v2](https://huggingface.co/senseable/Westlake-7B-v2)
* [decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP](https://huggingface.co/decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP)
* [mlabonne/NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: senseable/Westlake-7B-v2
parameters:
density: 0.73
weight: 0.4
- model: decruz07/kellemar-DPO-Orca-Distilled-7B-SLERP
parameters:
density: 0.55
weight: 0.3
- model: mlabonne/NeuralMarcoro14-7B
parameters:
density: 0.45
weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
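A configuration like the one above is typically materialized with mergekit's command-line entry point; the flags shown here are common options and the config path is a placeholder, not taken from this repo.

```python
# Hedged sketch: run the YAML above with mergekit's CLI
# (equivalent to `mergekit-yaml config.yaml ./merged --cuda` on the command line).
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./merged", "--cuda"],  # config.yaml = the block above
    check=True,
)
```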
Credit to Maxime Labonne and his excellent blog [https://mlabonne.github.io/blog/](https://mlabonne.github.io/blog/).
|
hlillemark/my_awesome_food_model
|
hlillemark
| 2024-01-24T00:44:46Z | 176 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-23T22:59:22Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8640
- Accuracy: 0.573
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 4.5210 | 0.036 |
| No log | 2.0 | 4 | 4.4151 | 0.278 |
| No log | 3.0 | 6 | 4.3629 | 0.437 |
| No log | 4.0 | 8 | 4.2960 | 0.547 |
| 4.3122 | 5.0 | 10 | 4.1697 | 0.589 |
| 4.3122 | 6.0 | 12 | 4.0601 | 0.568 |
| 4.3122 | 7.0 | 14 | 3.9770 | 0.521 |
| 4.3122 | 8.0 | 16 | 3.9177 | 0.539 |
| 4.3122 | 9.0 | 18 | 3.8843 | 0.545 |
| 3.9792 | 10.0 | 20 | 3.8640 | 0.573 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
AIFT/AIFT-ko-orca-plat-Yi-ko-6b-v1.2
|
AIFT
| 2024-01-24T00:41:00Z | 57 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T00:17:41Z |
---
license: cc-by-sa-4.0
---
<h1>orca-platypus - instruct model v1.2</h1>
<b><Training data construction></b>
We used the KOR-OpenOrca-Platypus data released by kyujinpy after partially removing (sampling) and cleaning it.
We then reviewed that data to extract related tasks and, based on them, built our own training data from open-source NLP resources tailored to those tasks:
history, science, math, machine reading comprehension, and review-analysis problems were constructed with GPT,
and additional training data was built from AIHub common-sense and machine reading comprehension data (morphology, machine reading comprehension, and summarization).
History and general-knowledge quizzes from various blogs were manually converted into training-data format.
Following the AI2AI Challenge data format, about 500 elementary-level science and math problems were created with GPT.
English translation data (EN-KO / KO-EN) was also used as training data.
In total, about 40,000 examples were used.
<br>
<br>
+ Additional TruthfulQA-related problems were added (true/false questions about profanity).
+ Machine reading comprehension training data was built by obtaining answers through ChatGPT.
+ Grammar-related training data
<br>
### The training data files are not publicly released.
<br>
<b><Training></b>
Training was performed using LoRA on two A100 40G GPUs.
|
dctanner/sablo-pebble-mistral-dpo-lora-HelpSteer_binarized
|
dctanner
| 2024-01-24T00:40:19Z | 8 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:sablo/HelpSteer_binarized",
"base_model:sablo/sablo-pebble-mistral",
"base_model:adapter:sablo/sablo-pebble-mistral",
"license:apache-2.0",
"region:us"
] | null | 2024-01-18T15:14:46Z |
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- sablo/HelpSteer_binarized
base_model: sablo/sablo-pebble-mistral
model-index:
- name: sablo-pebble-mistral-dpo-lora-HelpSteer_binarized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sablo-pebble-mistral-dpo-lora-HelpSteer_binarized
This model is a fine-tuned version of [sablo/sablo-pebble-mistral](https://huggingface.co/sablo/sablo-pebble-mistral) on the sablo/HelpSteer_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5371
- Rewards/chosen: -0.9335
- Rewards/rejected: -1.6455
- Rewards/accuracies: 0.7264
- Rewards/margins: 0.7121
- Logps/rejected: -298.0735
- Logps/chosen: -253.4149
- Logits/rejected: -2.4554
- Logits/chosen: -2.5093
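For context, the reward columns above are DPO's implicit rewards: β-scaled log-probability ratios between the policy and the reference model. A schematic computation with toy numbers follows; β and the log-probability values here are placeholders, since the actual β used for this run is not stated in the card.

```python
# Hedged sketch: how DPO-style reward columns are derived from per-sequence log-probs.
import torch

beta = 0.1  # placeholder; the beta used for this run is not stated in the card

# Dummy per-sequence log-probabilities for a small batch (stand-ins for real model outputs).
policy_chosen_logps = torch.tensor([-250.0, -260.0])
policy_rejected_logps = torch.tensor([-300.0, -290.0])
ref_chosen_logps = torch.tensor([-245.0, -255.0])
ref_rejected_logps = torch.tensor([-290.0, -282.0])

rewards_chosen = beta * (policy_chosen_logps - ref_chosen_logps)        # "Rewards/chosen"
rewards_rejected = beta * (policy_rejected_logps - ref_rejected_logps)  # "Rewards/rejected"
margins = rewards_chosen - rewards_rejected                             # "Rewards/margins"
accuracy = (margins > 0).float().mean()                                 # "Rewards/accuracies"
loss = -torch.nn.functional.logsigmoid(margins).mean()                  # the DPO objective
```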
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6874 | 0.1 | 100 | 0.6892 | 0.0213 | 0.0133 | 0.6698 | 0.0080 | -132.1924 | -157.9395 | -2.4463 | -2.4843 |
| 0.6592 | 0.2 | 200 | 0.6594 | 0.0055 | -0.0704 | 0.6698 | 0.0759 | -140.5588 | -159.5180 | -2.4922 | -2.5370 |
| 0.5451 | 0.3 | 300 | 0.5867 | -0.4490 | -0.7587 | 0.6863 | 0.3097 | -209.3938 | -204.9713 | -2.5128 | -2.5620 |
| 0.4933 | 0.39 | 400 | 0.5591 | -0.6060 | -1.1029 | 0.7146 | 0.4968 | -243.8062 | -220.6713 | -2.4868 | -2.5386 |
| 0.5271 | 0.49 | 500 | 0.5488 | -0.6712 | -1.2738 | 0.7193 | 0.6026 | -260.8958 | -227.1889 | -2.4784 | -2.5312 |
| 0.4594 | 0.59 | 600 | 0.5418 | -0.7977 | -1.4672 | 0.7311 | 0.6695 | -280.2420 | -239.8430 | -2.4672 | -2.5200 |
| 0.5444 | 0.69 | 700 | 0.5358 | -0.7688 | -1.4528 | 0.7335 | 0.6840 | -278.8014 | -236.9531 | -2.4594 | -2.5127 |
| 0.5755 | 0.79 | 800 | 0.5405 | -1.0672 | -1.7631 | 0.7311 | 0.6959 | -309.8293 | -266.7906 | -2.4585 | -2.5118 |
| 0.5495 | 0.89 | 900 | 0.5371 | -0.9321 | -1.6450 | 0.7288 | 0.7129 | -298.0242 | -253.2804 | -2.4558 | -2.5096 |
| 0.5948 | 0.98 | 1000 | 0.5371 | -0.9335 | -1.6455 | 0.7264 | 0.7121 | -298.0735 | -253.4149 | -2.4554 | -2.5093 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.6
- Tokenizers 0.15.0
|
varun-v-rao/t5-base-snli
|
varun-v-rao
| 2024-01-24T00:38:08Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-21T04:13:23Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-base-snli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-snli
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2842
- Accuracy: 0.8982
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3813 | 1.0 | 2146 | 0.3113 | 0.8875 |
| 0.3443 | 2.0 | 4292 | 0.2864 | 0.8966 |
| 0.3305 | 3.0 | 6438 | 0.2842 | 0.8982 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
sally9805/bert-base-uncased-finetuned-coha-1900s
|
sally9805
| 2024-01-24T00:36:23Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-22T23:15:04Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-coha-1900s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-coha-1900s
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5479
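A minimal sketch of querying the model with the fill-mask pipeline; the example sentence is illustrative.

```python
# Hedged sketch: query the fine-tuned masked LM for top completions.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="sally9805/bert-base-uncased-finetuned-coha-1900s")
for candidate in unmasker("The telephone was a remarkable [MASK] at the turn of the century."):
    print(candidate["token_str"], round(candidate["score"], 3))
```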
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7671 | 1.0 | 22219 | 2.5899 |
| 2.7099 | 2.0 | 44438 | 2.5504 |
| 2.7271 | 3.0 | 66657 | 2.5498 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
oosij/llama-2-ko-7b-ft-emo-multi
|
oosij
| 2024-01-24T00:22:17Z | 0 | 0 |
peft
|
[
"peft",
"base_model:beomi/llama-2-ko-7b",
"base_model:adapter:beomi/llama-2-ko-7b",
"region:us"
] | null | 2024-01-24T00:18:49Z |
---
library_name: peft
base_model: beomi/llama-2-ko-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A multi-turn chatbot model that is still being studied.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
<Prompt Template>
Referring to the previous conversation and the instruction in the current conversation, generate a kind response that empathizes with the situation. At the end of the response, ask a question related to the conversation so far.
[Previous conversation]
{}
[Current conversation]
### Instruction:
{}
### Response:
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
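A sketch of attaching this adapter to the stated base model with `peft`; the 4-bit loading mirrors the quantization flags above, and the device placement is an assumption.

```python
# Hedged sketch: load the base model in 4-bit (as configured above) and attach this adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-ko-7b", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "oosij/llama-2-ko-7b-ft-emo-multi")
tokenizer = AutoTokenizer.from_pretrained("beomi/llama-2-ko-7b")
```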
### Framework versions
- PEFT 0.6.2
|
tali1/autotrain-gpt2-gpu3
|
tali1
| 2024-01-24T00:11:18Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T00:11:17Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
yoon1000/ft_0124_korean_1
|
yoon1000
| 2024-01-24T00:11:16Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-23T07:08:34Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: ft_0124_korean_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft_0124_korean_1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4593
- Cer: 0.1067
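A minimal transcription sketch with the ASR pipeline; the audio path is a placeholder and 16 kHz mono input is assumed.

```python
# Hedged sketch: transcribe a Korean audio clip with the fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="yoon1000/ft_0124_korean_1")
result = asr("sample_korean_clip.wav")  # placeholder path
print(result["text"])
```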
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 33.156 | 0.44 | 500 | 10.0563 | 1.0 |
| 4.9299 | 0.88 | 1000 | 4.8856 | 1.0 |
| 4.6283 | 1.33 | 1500 | 4.5959 | 1.0 |
| 4.4245 | 1.77 | 2000 | 4.2900 | 0.9513 |
| 3.8155 | 2.21 | 2500 | 2.7733 | 0.5324 |
| 2.6597 | 2.65 | 3000 | 2.0091 | 0.4216 |
| 2.1347 | 3.09 | 3500 | 1.5842 | 0.3535 |
| 1.7847 | 3.53 | 4000 | 1.3425 | 0.3124 |
| 1.6031 | 3.98 | 4500 | 1.1478 | 0.2750 |
| 1.3867 | 4.42 | 5000 | 0.9914 | 0.2466 |
| 1.2552 | 4.86 | 5500 | 0.8959 | 0.2258 |
| 1.1442 | 5.3 | 6000 | 0.8326 | 0.2123 |
| 1.0747 | 5.74 | 6500 | 0.7708 | 0.2053 |
| 0.985 | 6.18 | 7000 | 0.7137 | 0.1864 |
| 0.921 | 6.63 | 7500 | 0.6822 | 0.1818 |
| 0.8817 | 7.07 | 8000 | 0.6435 | 0.1716 |
| 0.8043 | 7.51 | 8500 | 0.6338 | 0.1692 |
| 0.7938 | 7.95 | 9000 | 0.6075 | 0.1613 |
| 0.7296 | 8.39 | 9500 | 0.5844 | 0.1578 |
| 0.7061 | 8.83 | 10000 | 0.5695 | 0.1533 |
| 0.6566 | 9.28 | 10500 | 0.5695 | 0.1478 |
| 0.6452 | 9.72 | 11000 | 0.5346 | 0.1439 |
| 0.6178 | 10.16 | 11500 | 0.5184 | 0.1404 |
| 0.5887 | 10.6 | 12000 | 0.5152 | 0.1360 |
| 0.5739 | 11.04 | 12500 | 0.5062 | 0.1356 |
| 0.5338 | 11.48 | 13000 | 0.5135 | 0.1321 |
| 0.5391 | 11.93 | 13500 | 0.5021 | 0.1316 |
| 0.4964 | 12.37 | 14000 | 0.4924 | 0.1269 |
| 0.4959 | 12.81 | 14500 | 0.4860 | 0.1262 |
| 0.4731 | 13.25 | 15000 | 0.4893 | 0.1227 |
| 0.4651 | 13.69 | 15500 | 0.4718 | 0.1204 |
| 0.4446 | 14.13 | 16000 | 0.4815 | 0.1180 |
| 0.4175 | 14.58 | 16500 | 0.4780 | 0.1189 |
| 0.4249 | 15.02 | 17000 | 0.4678 | 0.1163 |
| 0.4073 | 15.46 | 17500 | 0.4599 | 0.1141 |
| 0.3948 | 15.9 | 18000 | 0.4676 | 0.1136 |
| 0.3795 | 16.34 | 18500 | 0.4656 | 0.1119 |
| 0.3807 | 16.78 | 19000 | 0.4642 | 0.1100 |
| 0.3675 | 17.23 | 19500 | 0.4661 | 0.1108 |
| 0.3609 | 17.67 | 20000 | 0.4589 | 0.1086 |
| 0.3454 | 18.11 | 20500 | 0.4645 | 0.1088 |
| 0.3451 | 18.55 | 21000 | 0.4570 | 0.1076 |
| 0.3496 | 18.99 | 21500 | 0.4555 | 0.1072 |
| 0.3327 | 19.43 | 22000 | 0.4619 | 0.1075 |
| 0.334 | 19.88 | 22500 | 0.4593 | 0.1067 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
liminerity/Ingot-7b-slerp-6
|
liminerity
| 2024-01-24T00:01:41Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Ingot-7b-slerp-2",
"liminerity/Ingot-7b-slerp-4",
"base_model:liminerity/Ingot-7b-slerp-2",
"base_model:merge:liminerity/Ingot-7b-slerp-2",
"base_model:liminerity/Ingot-7b-slerp-4",
"base_model:merge:liminerity/Ingot-7b-slerp-4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T23:51:06Z |
---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Ingot-7b-slerp-2
- liminerity/Ingot-7b-slerp-4
base_model:
- liminerity/Ingot-7b-slerp-2
- liminerity/Ingot-7b-slerp-4
---
# Ingot-7b-slerp-6
Ingot-7b-slerp-6 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Ingot-7b-slerp-2](https://huggingface.co/liminerity/Ingot-7b-slerp-2)
* [liminerity/Ingot-7b-slerp-4](https://huggingface.co/liminerity/Ingot-7b-slerp-4)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Ingot-7b-slerp-2
layer_range: [0, 32]
- model: liminerity/Ingot-7b-slerp-4
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Ingot-7b-slerp-2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Ingot-7b-slerp-6"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
IB13/bloom-560m_reward_model_ps_right
|
IB13
| 2024-01-23T23:59:23Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:finetune:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-20T16:17:18Z |
---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- generated_from_trainer
model-index:
- name: bloom-560m_reward_model_ps_right
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom-560m_reward_model_ps_right
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
h2m/BurningBruce-004-4x7b
|
h2m
| 2024-01-23T23:53:24Z | 15 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"arxiv:2101.03961",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T09:51:48Z |

# A frankenMoE of 4 merged models
BurningBruce is a codename given to models created by members of Convex. Our purpose is to try our hand at making the most well-rounded models possible without the hassle of building and maintaining hundreds of thousands of dollars' worth of equipment.
We will be sending Bruce through many different iterations, hopefully each one improving upon the last.
The mergekit config can be found in the files.
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of "experts" (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token "More" is sent to the second expert, and the token "Parameters" is sent to the first network. As we'll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the "experts") to process the token and combine their output additively.

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they've historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we'll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That's because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
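As an illustration of that balancing term, here is a toy, Switch-style auxiliary-loss sketch; the shapes are illustrative and this is not the training code behind this repo.

```python
# Hedged sketch: Switch-style auxiliary load-balancing loss over a batch of router logits.
import torch
import torch.nn.functional as F

n_tokens, n_experts = 64, 8
router_logits = torch.randn(n_tokens, n_experts)

probs = F.softmax(router_logits, dim=-1)
assignments = probs.argmax(dim=-1)                                          # expert picked per token
tokens_per_expert = F.one_hot(assignments, n_experts).float().mean(dim=0)   # fraction f_i
mean_router_prob = probs.mean(dim=0)                                        # average probability P_i

aux_loss = n_experts * torch.sum(tokens_per_expert * mean_router_prob)
print(aux_loss)  # ≈ 1 when routing is balanced, larger when a few experts dominate
```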
## "Wait...but you called this a frankenMoE?"
The difference between MoE and "frankenMoE" lies in the fact that the router layer in a model like the one on this repo is not trained simultaneously.
|
AISimplyExplained/Vakil-7B
|
AISimplyExplained
| 2024-01-23T23:42:47Z | 1,511 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"legal",
"en",
"dataset:AISimplyExplained/LegalReasoningIndianLaw",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-23T12:50:25Z |
---
license: mit
datasets:
- AISimplyExplained/LegalReasoningIndianLaw
language:
- en
library_name: transformers
tags:
- legal
inference: false
---
# Vakil-7B Model Card
### Model Description
Vakil-7B is a state-of-the-art language model fine-tuned on the `AISimplyExplained/LegalReasoningIndianLaw` dataset for specialization in the nuances and complexities of Indian law. It is designed to provide legal professionals, students, and researchers with insights and assistance in understanding legal documents and queries within the context of the Indian legal system.
Developed by Asmi Gulati and Bhuvi Jain, this tool aims to enhance the accessibility and analysis of legal texts, driving forward the digital transformation in the legal domain.
### Model Specifications
- **Developed by:** Asmi Gulati and Bhuvi Jain
- **Model type:** Fine-tuned language model
- **Language(s) (NLP):** English, with a focus on Indian legal terminology
- **License:** MIT
- **Finetuned from model:** `transformers` library model
## Directions for Usage
```python
!pip install "unsloth[colab_ampere] @ git+https://github.com/unslothai/unsloth.git"
!pip install "git+https://github.com/huggingface/transformers.git"
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("AISimplyExplained/Vakil-7B")
model = AutoModelForCausalLM.from_pretrained("AISimplyExplained/Vakil-7B")
```
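A possible way to query the loaded model; the prompt wording and generation settings below are assumptions, not an official prompt format for Vakil-7B.

```python
# Hedged sketch: continue from the tokenizer/model loaded above and generate a short answer.
prompt = "Explain the concept of anticipatory bail under Indian law in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```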
### Intended Use
Vakil-7B is intended for direct use by legal professionals and researchers who need to interact with Indian legal text. It is designed to assist with legal research, drafting, and education by providing AI-driven analysis and insights.
### Out-of-Scope Use
Vakil-7B is not designed to replace professional legal advice or to be used as a standalone decision-making tool. It should be used as an aid in the legal research and analysis process, not as the sole source of guidance.
## Bias, Risks, and Limitations
Users should be aware of the inherent limitations of AI in interpreting legal text. Vakil-7B, while sophisticated, may not capture all nuances and should be used in conjunction with professional judgment.
|
Cathaysa/sst2-es-mt
|
Cathaysa
| 2024-01-23T23:36:58Z | 91 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-21T16:54:06Z |
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sst2-es-mt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sst2-es-mt
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3469
- Accuracy: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3117 | 1.0 | 6735 | 0.3145 | 0.9053 |
| 0.2296 | 2.0 | 13470 | 0.3453 | 0.9214 |
| 0.1642 | 3.0 | 20205 | 0.3469 | 0.9264 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
liminerity/Ingot-7b-slerp-5
|
liminerity
| 2024-01-23T23:29:29Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Ingot-7b-slerp",
"liminerity/Ingot-7b-slerp-3",
"base_model:liminerity/Ingot-7b-slerp",
"base_model:merge:liminerity/Ingot-7b-slerp",
"base_model:liminerity/Ingot-7b-slerp-3",
"base_model:merge:liminerity/Ingot-7b-slerp-3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T23:20:28Z |
---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Ingot-7b-slerp
- liminerity/Ingot-7b-slerp-3
base_model:
- liminerity/Ingot-7b-slerp
- liminerity/Ingot-7b-slerp-3
---
# Ingot-7b-slerp-5
Ingot-7b-slerp-5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Ingot-7b-slerp](https://huggingface.co/liminerity/Ingot-7b-slerp)
* [liminerity/Ingot-7b-slerp-3](https://huggingface.co/liminerity/Ingot-7b-slerp-3)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Ingot-7b-slerp
layer_range: [0, 32]
- model: liminerity/Ingot-7b-slerp-3
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Ingot-7b-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Ingot-7b-slerp-5"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
RohanHBTU/bart-large-finetuned-question-to-answer
|
RohanHBTU
| 2024-01-23T23:24:13Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-23T19:14:16Z |
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: bart-large-finetuned-question-to-answer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-question-to-answer
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1153
- Bleu: 42.8973
- Gen Len: 18.69
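The card does not yet include a usage example; the snippet below is a minimal sketch using the `text2text-generation` pipeline (the input question is purely illustrative, since the training data is not documented):
```python
from transformers import pipeline
# Load the fine-tuned BART checkpoint from the Hub.
qa_generator = pipeline("text2text-generation", model="RohanHBTU/bart-large-finetuned-question-to-answer")
# Illustrative input only; the dataset behind this model is not documented in the card.
result = qa_generator("What is the capital of France?", max_new_tokens=64)
print(result[0]["generated_text"])
```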
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.8366 | 1.0 | 516 | 0.3882 | 32.192 | 18.8467 |
| 0.7567 | 2.0 | 1032 | 0.3263 | 34.6627 | 18.8333 |
| 0.6634 | 3.0 | 1548 | 0.2838 | 34.3455 | 18.8567 |
| 0.587 | 4.0 | 2064 | 0.2207 | 37.4365 | 18.8467 |
| 0.5178 | 5.0 | 2580 | 0.2778 | 36.1141 | 19.2267 |
| 0.4555 | 6.0 | 3096 | 0.1872 | 39.1633 | 18.6967 |
| 0.4137 | 7.0 | 3612 | 0.1854 | 39.3042 | 18.98 |
| 0.3672 | 8.0 | 4128 | 0.1543 | 40.8359 | 18.68 |
| 0.331 | 9.0 | 4644 | 0.1548 | 41.0895 | 18.54 |
| 0.3056 | 10.0 | 5160 | 0.1599 | 42.3384 | 18.6767 |
| 0.2762 | 11.0 | 5676 | 0.1508 | 41.1395 | 18.8167 |
| 0.2533 | 12.0 | 6192 | 0.1224 | 42.1233 | 18.7033 |
| 0.2332 | 13.0 | 6708 | 0.1195 | 42.8086 | 18.6967 |
| 0.2209 | 14.0 | 7224 | 0.1158 | 43.0663 | 18.72 |
| 0.21 | 15.0 | 7740 | 0.1153 | 42.8973 | 18.69 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
Abhinav28/large-v3-hi-commonvoice-11-peft-trained-adapter-withfp16-50-percent
|
Abhinav28
| 2024-01-23T23:22:34Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"region:us"
] | null | 2024-01-23T23:22:23Z |
---
library_name: peft
base_model: openai/whisper-large-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
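Until the authors complete this section, the following is a minimal, hedged sketch based on the card metadata: a PEFT adapter on top of `openai/whisper-large-v3`, with the Hindi Common Voice setup inferred from the repository name.
```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor
base_id = "openai/whisper-large-v3"
adapter_id = "Abhinav28/large-v3-hi-commonvoice-11-peft-trained-adapter-withfp16-50-percent"
processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the (presumed LoRA) adapter
model.eval()
# One second of silence stands in for a real 16 kHz Hindi recording.
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features
with torch.no_grad():
    ids = model.generate(input_features=inputs, language="hi", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```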
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
hollandpleskac/my_awesome_opus_books_model
|
hollandpleskac
| 2024-01-23T23:20:49Z | 98 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-23T22:47:08Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5666
- Bleu: 6.0755
- Gen Len: 17.5677
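No usage snippet is provided; a minimal sketch with the `text2text-generation` pipeline is shown below. The English-to-French task prefix is an assumption, since the card does not name the dataset or language pair:
```python
from transformers import pipeline
translator = pipeline("text2text-generation", model="hollandpleskac/my_awesome_opus_books_model")
# The language pair is not documented; English -> French is assumed here because
# that is the usual setup for this T5 "opus_books" tutorial recipe.
text = "translate English to French: Legumes share resources with nitrogen-fixing bacteria."
print(translator(text, max_new_tokens=64)[0]["generated_text"])
```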
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.776 | 1.0 | 6355 | 1.5820 | 5.9716 | 17.5761 |
| 1.7617 | 2.0 | 12710 | 1.5666 | 6.0755 | 17.5677 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
am-infoweb/rap_phase2_22jan_8i_v1
|
am-infoweb
| 2024-01-23T23:10:50Z | 92 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-23T17:13:29Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: rap_phase2_22jan_8i_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rap_phase2_22jan_8i_v1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0041
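The card lacks a usage example; a minimal sketch with the `question-answering` pipeline follows (the question/context pair is invented for illustration):
```python
from transformers import pipeline
qa = pipeline("question-answering", model="am-infoweb/rap_phase2_22jan_8i_v1")
# Illustrative inputs only; the training data is not documented in the card.
result = qa(question="Who wrote the report?", context="The annual report was written by the finance team in January 2024.")
print(result["answer"], result["score"])
```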
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.1006 | 1.0 | 12024 | 0.0713 |
| 0.026 | 2.0 | 24048 | 0.0370 |
| 0.0404 | 3.0 | 36072 | 0.0359 |
| 0.0288 | 4.0 | 48096 | 0.0100 |
| 0.0131 | 5.0 | 60120 | 0.0152 |
| 0.0181 | 6.0 | 72144 | 0.0067 |
| 0.0156 | 7.0 | 84168 | 0.0031 |
| 0.0 | 8.0 | 96192 | 0.0038 |
| 0.0 | 9.0 | 108216 | 0.0043 |
| 0.0006 | 10.0 | 120240 | 0.0041 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
A-Bar/BioMedNLP_DeBERTa_all_updates
|
A-Bar
| 2024-01-23T23:03:32Z | 93 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:sem_eval_2024_task_2",
"base_model:hongpingjun98/BioMedNLP_DeBERTa",
"base_model:finetune:hongpingjun98/BioMedNLP_DeBERTa",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-23T17:13:36Z |
---
license: mit
base_model: hongpingjun98/BioMedNLP_DeBERTa
tags:
- generated_from_trainer
datasets:
- sem_eval_2024_task_2
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: BioMedNLP_DeBERTa_all_updates
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sem_eval_2024_task_2
type: sem_eval_2024_task_2
config: sem_eval_2024_task_2_source
split: validation
args: sem_eval_2024_task_2_source
metrics:
- name: Accuracy
type: accuracy
value: 0.705
- name: Precision
type: precision
value: 0.7238235615241838
- name: Recall
type: recall
value: 0.7050000000000001
- name: F1
type: f1
value: 0.6986644194182692
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioMedNLP_DeBERTa_all_updates
This model is a fine-tuned version of [hongpingjun98/BioMedNLP_DeBERTa](https://huggingface.co/hongpingjun98/BioMedNLP_DeBERTa) on the sem_eval_2024_task_2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1863
- Accuracy: 0.705
- Precision: 0.7238
- Recall: 0.7050
- F1: 0.6987
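For reference, a minimal inference sketch is shown below. SemEval-2024 Task 2 pairs a clinical trial premise with a statement, so the inputs are passed as a sentence pair; the example pair is invented, and the label names follow whatever mapping is stored in the model config:
```python
from transformers import pipeline
clf = pipeline("text-classification", model="A-Bar/BioMedNLP_DeBERTa_all_updates")
# Invented premise/statement pair for illustration only.
pair = {"text": "Patients in the intervention arm received 10 mg of the drug daily.", "text_pair": "The trial evaluated a 10 mg daily dose."}
print(clf(pair))
```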
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4238 | 1.0 | 116 | 0.6639 | 0.665 | 0.6678 | 0.665 | 0.6636 |
| 0.4316 | 2.0 | 232 | 0.6644 | 0.68 | 0.6875 | 0.6800 | 0.6768 |
| 0.3819 | 3.0 | 348 | 0.7328 | 0.71 | 0.7188 | 0.71 | 0.7071 |
| 0.3243 | 4.0 | 464 | 0.9162 | 0.7 | 0.7083 | 0.7 | 0.6970 |
| 0.4053 | 5.0 | 580 | 0.7145 | 0.715 | 0.7214 | 0.7150 | 0.7129 |
| 0.2548 | 6.0 | 696 | 1.0598 | 0.69 | 0.7016 | 0.69 | 0.6855 |
| 0.3455 | 7.0 | 812 | 0.7782 | 0.72 | 0.7232 | 0.72 | 0.7190 |
| 0.2177 | 8.0 | 928 | 1.1182 | 0.69 | 0.6950 | 0.69 | 0.6880 |
| 0.2304 | 9.0 | 1044 | 1.4332 | 0.695 | 0.708 | 0.695 | 0.6902 |
| 0.2103 | 10.0 | 1160 | 1.2736 | 0.7 | 0.7198 | 0.7 | 0.6931 |
| 0.1748 | 11.0 | 1276 | 1.2654 | 0.675 | 0.6816 | 0.675 | 0.6720 |
| 0.1608 | 12.0 | 1392 | 1.8885 | 0.63 | 0.6689 | 0.63 | 0.6074 |
| 0.1082 | 13.0 | 1508 | 1.7004 | 0.68 | 0.7005 | 0.6800 | 0.6716 |
| 0.1074 | 14.0 | 1624 | 1.8145 | 0.67 | 0.6804 | 0.67 | 0.6652 |
| 0.0238 | 15.0 | 1740 | 1.7608 | 0.68 | 0.6931 | 0.68 | 0.6745 |
| 0.038 | 16.0 | 1856 | 1.9937 | 0.67 | 0.6953 | 0.6700 | 0.6589 |
| 0.0365 | 17.0 | 1972 | 2.1871 | 0.675 | 0.6964 | 0.675 | 0.6659 |
| 0.0144 | 18.0 | 2088 | 2.1093 | 0.695 | 0.7059 | 0.6950 | 0.6909 |
| 0.0014 | 19.0 | 2204 | 2.1559 | 0.695 | 0.7103 | 0.6950 | 0.6893 |
| 0.0324 | 20.0 | 2320 | 2.1863 | 0.705 | 0.7238 | 0.7050 | 0.6987 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
liminerity/Ingot-7b-slerp-3
|
liminerity
| 2024-01-23T22:47:06Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"flemmingmiguel/MBX-7B",
"liminerity/Ingot-7b-slerp",
"base_model:flemmingmiguel/MBX-7B",
"base_model:merge:flemmingmiguel/MBX-7B",
"base_model:liminerity/Ingot-7b-slerp",
"base_model:merge:liminerity/Ingot-7b-slerp",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T22:39:16Z |
---
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/MBX-7B
- liminerity/Ingot-7b-slerp
base_model:
- flemmingmiguel/MBX-7B
- liminerity/Ingot-7b-slerp
---
# Ingot-7b-slerp-3
Ingot-7b-slerp-3 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [flemmingmiguel/MBX-7B](https://huggingface.co/flemmingmiguel/MBX-7B)
* [liminerity/Ingot-7b-slerp](https://huggingface.co/liminerity/Ingot-7b-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: flemmingmiguel/MBX-7B
layer_range: [0, 32]
- model: liminerity/Ingot-7b-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: flemmingmiguel/MBX-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Ingot-7b-slerp-3"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
KukuruJPN/Oliver_Atom
|
KukuruJPN
| 2024-01-23T22:46:09Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-01-23T22:39:49Z |
---
license: other
license_name: msamsm
license_link: LICENSE
---
|
liminerity/Ingot-7b-slerp-2
|
liminerity
| 2024-01-23T22:30:03Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Ingot-7b-slerp",
"flemmingmiguel/MBX-7B",
"base_model:flemmingmiguel/MBX-7B",
"base_model:merge:flemmingmiguel/MBX-7B",
"base_model:liminerity/Ingot-7b-slerp",
"base_model:merge:liminerity/Ingot-7b-slerp",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T22:21:42Z |
---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Ingot-7b-slerp
- flemmingmiguel/MBX-7B
base_model:
- liminerity/Ingot-7b-slerp
- flemmingmiguel/MBX-7B
---
# Ingot-7b-slerp-2
Ingot-7b-slerp-2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Ingot-7b-slerp](https://huggingface.co/liminerity/Ingot-7b-slerp)
* [flemmingmiguel/MBX-7B](https://huggingface.co/flemmingmiguel/MBX-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/Ingot-7b-slerp
layer_range: [0, 32]
- model: flemmingmiguel/MBX-7B
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Ingot-7b-slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "liminerity/Ingot-7b-slerp-2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Nerdofdot/Nerdofdot_nickprock_mmarco-bert-base-italian-uncased_TM_FTM
|
Nerdofdot
| 2024-01-23T22:20:22Z | 47 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-23T22:20:03Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7975 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 0.4}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2392,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
oGabrielFreitas/roberta-teste
|
oGabrielFreitas
| 2024-01-23T22:17:47Z | 102 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-23T15:16:29Z |
---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: roberta-teste
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-teste
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear
|
Weyaxi
| 2024-01-23T22:16:17Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B",
"base_model:merge:Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B",
"base_model:meta-math/MetaMath-Mistral-7B",
"base_model:merge:meta-math/MetaMath-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-05T10:52:55Z |
---
license: apache-2.0
tags:
- merge
base_model:
- meta-math/MetaMath-Mistral-7B
- Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B
---
# MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear
This is the model card for MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear. I used [mergekit](https://github.com/cg123/mergekit) to merge the models listed in the config below.
# Yaml Config
```yaml
models:
- model: meta-math/MetaMath-Mistral-7B
parameters:
weight: 0.5
- model: Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B
parameters:
weight: 0.3
merge_method: linear
dtype: float16
```
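# Usage
A merged checkpoint like this loads with plain `transformers`; the sketch below is illustrative only, as the card does not document a prompt format (the merged parents use different chat templates, so adjust the prompt as needed):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
# Illustrative prompt; no official prompt format is documented for this merge.
prompt = "Question: What is 17 * 24? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```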
|
DaRkSpyro/HiccupHowToTrainYourDragon
|
DaRkSpyro
| 2024-01-23T22:12:46Z | 0 | 0 |
flair
|
[
"flair",
"music",
"en",
"dataset:HuggingFaceM4/WebSight",
"license:apache-2.0",
"region:us"
] | null | 2024-01-23T22:11:44Z |
---
license: apache-2.0
datasets:
- HuggingFaceM4/WebSight
language:
- en
metrics:
- accuracy
library_name: flair
tags:
- music
---
|
Weyaxi/Einstein-openchat-7B
|
Weyaxi
| 2024-01-23T22:12:30Z | 1,329 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T22:03:13Z |
---
license: other
---
# Einstein-openchat-7B
This is the model card of [Einstein-openchat-7B](https://huggingface.co/Weyaxi/Einstein-openchat-7B).
It is a LoRA merge of https://huggingface.co/Weyaxi/Einstein-7B with https://huggingface.co/openchat/openchat-3.5-0106.
|
Shijia/flan_biomedidal
|
Shijia
| 2024-01-23T22:11:18Z | 118 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:sem_eval_2024_task_2",
"base_model:Shijia/run1",
"base_model:finetune:Shijia/run1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-23T21:25:00Z |
---
license: apache-2.0
base_model: Shijia/run1
tags:
- generated_from_trainer
datasets:
- sem_eval_2024_task_2
metrics:
- accuracy
model-index:
- name: flan_biomedidal
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: sem_eval_2024_task_2
type: sem_eval_2024_task_2
config: sem_eval_2024_task_2_source
split: validation
args: sem_eval_2024_task_2_source
metrics:
- name: Accuracy
type: accuracy
value: 0.5
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan_biomedidal
This model is a fine-tuned version of [Shijia/run1](https://huggingface.co/Shijia/run1) on the sem_eval_2024_task_2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3473
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 58 | 0.3479 | 0.5 |
| 0.3671 | 2.0 | 116 | 0.3496 | 0.5 |
| 0.3671 | 3.0 | 174 | 0.3486 | 0.5 |
| 0.37 | 4.0 | 232 | 0.3477 | 0.5 |
| 0.37 | 5.0 | 290 | 0.3473 | 0.5 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
onarganogun/videomae-large-fight_22-01-2024
|
onarganogun
| 2024-01-23T22:08:59Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-large",
"base_model:finetune:MCG-NJU/videomae-large",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-01-23T17:28:56Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: videomae-large-fight_22-01-2024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-large-fight_22-01-2024
This model is a fine-tuned version of [MCG-NJU/videomae-large](https://huggingface.co/MCG-NJU/videomae-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6263
- Accuracy: 0.8565
- Precision: 0.8502
- Recall: 0.8655
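The card has no inference example; the sketch below is a minimal illustration with dummy frames. It assumes the preprocessor config was pushed with the model — otherwise load the processor from the base `MCG-NJU/videomae-large` checkpoint:
```python
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor
model_id = "onarganogun/videomae-large-fight_22-01-2024"
processor = VideoMAEImageProcessor.from_pretrained(model_id)
model = VideoMAEForVideoClassification.from_pretrained(model_id)
# 16 dummy RGB frames stand in for a real decoded video clip.
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```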
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 9080
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|
| 0.6582 | 0.05 | 454 | 0.6970 | 0.5695 | 0.5660 | 0.5964 |
| 0.6712 | 1.05 | 908 | 0.6281 | 0.6390 | 0.6202 | 0.7175 |
| 0.5664 | 2.05 | 1362 | 0.6718 | 0.6457 | 0.6555 | 0.6143 |
| 0.5645 | 3.05 | 1816 | 0.5835 | 0.7018 | 0.6974 | 0.7130 |
| 0.4259 | 4.05 | 2270 | 0.5497 | 0.7197 | 0.7402 | 0.6771 |
| 0.3542 | 5.05 | 2724 | 0.5509 | 0.7466 | 0.7434 | 0.7534 |
| 0.3676 | 6.05 | 3178 | 0.4956 | 0.7623 | 0.7532 | 0.7803 |
| 0.2656 | 7.05 | 3632 | 0.5263 | 0.7534 | 0.7811 | 0.7040 |
| 0.4675 | 8.05 | 4086 | 0.5216 | 0.7915 | 0.8009 | 0.7758 |
| 0.1434 | 9.05 | 4540 | 0.4744 | 0.8094 | 0.8136 | 0.8027 |
| 0.1389 | 10.05 | 4994 | 0.5389 | 0.8318 | 0.8274 | 0.8386 |
| 0.3228 | 11.05 | 5448 | 0.5345 | 0.8341 | 0.8599 | 0.7982 |
| 0.1044 | 12.05 | 5902 | 0.5729 | 0.8341 | 0.8465 | 0.8161 |
| 0.0305 | 13.05 | 6356 | 0.5812 | 0.8363 | 0.8378 | 0.8341 |
| 0.1256 | 14.05 | 6810 | 0.5806 | 0.8520 | 0.8489 | 0.8565 |
| 0.2735 | 15.05 | 7264 | 0.5713 | 0.8520 | 0.8618 | 0.8386 |
| 0.2376 | 16.05 | 7718 | 0.6030 | 0.8498 | 0.8578 | 0.8386 |
| 0.2978 | 17.05 | 8172 | 0.6263 | 0.8565 | 0.8502 | 0.8655 |
| 0.3872 | 18.05 | 8626 | 0.6099 | 0.8520 | 0.8489 | 0.8565 |
| 0.6629 | 19.05 | 9080 | 0.6142 | 0.8543 | 0.8496 | 0.8610 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
flavioegoncalves/speecht5_tts_portuguese
|
flavioegoncalves
| 2024-01-23T22:07:02Z | 39 | 2 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"tts",
"generated_from_trainer",
"pt",
"dataset:multilingual_librispeech",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-01-20T20:38:36Z |
---
language:
- pt
license: mit
base_model: microsoft/speecht5_tts
tags:
- tts
- generated_from_trainer
datasets:
- multilingual_librispeech
model-index:
- name: SpeechT5 TTS Portuguese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Portuguese
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the MultilingualLibrispeech dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3452
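No usage example is given; the sketch below follows the standard SpeechT5 TTS recipe and is only illustrative — the speaker x-vector comes from the public `Matthijs/cmu-arctic-xvectors` set and is an assumption, not an embedding of the fine-tuning speakers:
```python
import soundfile as sf
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor
model_id = "flavioegoncalves/speecht5_tts_portuguese"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
# Stand-in speaker x-vector from a public embedding set (not one of the fine-tuning speakers).
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)
inputs = processor(text="Olá, tudo bem com você?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```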
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4091 | 1.68 | 1000 | 0.3728 |
| 0.3906 | 3.35 | 2000 | 0.3598 |
| 0.3899 | 5.03 | 3000 | 0.3543 |
| 0.3842 | 6.71 | 4000 | 0.3518 |
| 0.376 | 8.38 | 5000 | 0.3492 |
| 0.3745 | 10.06 | 6000 | 0.3474 |
| 0.3773 | 11.74 | 7000 | 0.3473 |
| 0.3774 | 13.41 | 8000 | 0.3461 |
| 0.3719 | 15.09 | 9000 | 0.3454 |
| 0.3712 | 16.76 | 10000 | 0.3452 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
OdiaGenAI/odiagenAI-bengali-base-model-v1
|
OdiaGenAI
| 2024-01-23T22:05:10Z | 62 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"bn",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T18:55:19Z |
---
license: cc-by-nc-4.0
language:
- bn
---
# Model Card for Model ID
## Model description
odiagenAI-bengali-base-model-v1 is based on Llama-7b and fine-tuned with a 252k Bengali instruction set. The instruction set is translated from open-source resources, resulting in good Bengali instruction understanding and response generation capabilities.
The code for Bengali data generation and other detailed information can be found in our GitHub project repository: https://github.com/OdiaGenAI/GenerativeAI_and_LLM_Odia.
## Training hyper-parameters
| Parameter | Value |
| ------ | ------ |
| Batch size | 128 |
| Learning rate | 3e-4 |
| Epochs | 5 |
|Cutoff length | 256 |
|Weight_decay | 0.001 |
|Warmup_rate | 0.1 |
|LR_scheduler | linear |
|Lora r | 16 |
|Lora target modules | (q_proj, k_proj, v_proj, o_proj) |
Instructions for running it can be found at https://github.com/OdiaGenAI/GenerativeAI_and_LLM_Odia.
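## Inference example
For a quick local test, the checkpoint can also be loaded directly with `transformers`. This is a minimal sketch that assumes the repository hosts merged full weights and uses an Alpaca-style prompt; check the GitHub repository above for the exact prompt template:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "OdiaGenAI/odiagenAI-bengali-base-model-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
# Alpaca-style prompt; this format is an assumption, see the GitHub repository for the exact template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nবাংলাদেশের রাজধানীর নাম কী?\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```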
### Licensing Information
This work is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
### Citation Information
If you find this repository helpful, please consider giving it a ⭐ and citing:
```
@misc{OdiaGenAI-Bengali-LLM,
author = {Shantipriya Parida and Sambit Sekhar and Guneet Singh Kohli and Arghyadeep Sen and Shashikanta Sahoo},
title = {Bengali Instruction-Tuning Model},
year = {2023},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/OdiaGenAI}},
}
```
### Contributions
- Shantipriya Parida
- Sambit Sekhar
- Guneet Singh Kohli
- Arghyadeep Sen
- Shashikanta Sahoo
|
Mihaiii/stablelm-zephyr-3b_onnx
|
Mihaiii
| 2024-01-23T21:59:23Z | 1 | 0 |
transformers
|
[
"transformers",
"onnx",
"stablelm_epoch",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-23T21:32:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mayflowergmbh/DiscoPhoenix-7B
|
mayflowergmbh
| 2024-01-23T21:41:30Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"DiscoResearch/DiscoLM_German_7b_v1",
"DRXD1000/Phoenix",
"OpenPipe/mistral-ft-optimized-1227",
"base_model:DRXD1000/Phoenix-7B",
"base_model:merge:DRXD1000/Phoenix-7B",
"base_model:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:merge:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:OpenPipe/mistral-ft-optimized-1227",
"base_model:merge:OpenPipe/mistral-ft-optimized-1227",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T21:33:42Z |
---
tags:
- merge
- mergekit
- lazymergekit
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- OpenPipe/mistral-ft-optimized-1227
base_model:
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- OpenPipe/mistral-ft-optimized-1227
---
# DiscoPhoenix-7B
DiscoPhoenix-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
* [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: DiscoResearch/DiscoLM_German_7b_v1
parameters:
density: 0.6
weight: 0.3
- model: DRXD1000/Phoenix
parameters:
density: 0.6
weight: 0.3
- model: OpenPipe/mistral-ft-optimized-1227
parameters:
density: 0.6
weight: 0.4
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayflowergmbh/DiscoPhoenix-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
nisso22/llama
|
nisso22
| 2024-01-23T21:36:59Z | 72 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T21:35:52Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
deadniell/xokas_2024_nuevo_microfono_v1
|
deadniell
| 2024-01-23T21:34:04Z | 0 | 1 | null |
[
"streamer",
"twitch",
"espaรฑa",
"elxokas",
"xokas",
"es",
"license:openrail",
"region:us"
] | null | 2024-01-23T21:31:30Z |
---
license: openrail
language:
- es
tags:
- streamer
- twitch
- españa
- elxokas
- xokas
---
|
adaca001/clasificador-muchocine
|
adaca001
| 2024-01-23T21:28:19Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"safetensors",
"electra",
"classification",
"generated_from_trainer",
"en",
"dataset:muchocine",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:adapter:mrm8488/electricidad-base-discriminator",
"region:us"
] | null | 2024-01-23T20:13:29Z |
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
datasets:
- muchocine
language:
- en
library_name: adapter-transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4141
- Accuracy: 0.3639
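As a quick illustration, the classifier can be queried through the `text-classification` pipeline, assuming the repository contains full model weights rather than only an adapter; the Spanish review below is invented, and the label-to-rating mapping depends on the training script:
```python
from transformers import pipeline
clf = pipeline("text-classification", model="adaca001/clasificador-muchocine")
# Invented Spanish movie review; label ids map to star ratings only if the
# training script kept the muchocine rating scheme.
review = "Una película entretenida, aunque el guion flojea en la segunda mitad."
print(clf(review))
```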
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.5202 | 0.3381 |
| 1.5131 | 2.0 | 776 | 1.4459 | 0.3394 |
| 1.3789 | 3.0 | 1164 | 1.4141 | 0.3639 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Arnie936/corgy_Hadl_LoRA
|
Arnie936
| 2024-01-23T21:27:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-23T21:27:03Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK person
license: openrail++
---
# SDXL LoRA DreamBooth - Arnie936/corgy_Hadl_LoRA
<Gallery />
## Model description
These are the Arnie936/corgy_Hadl_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK person` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Arnie936/corgy_Hadl_LoRA/tree/main) them in the Files & versions tab.
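## Use with diffusers
A minimal inference sketch, assuming a CUDA device and the fp16 base weights; note that the trigger phrase above must appear in the prompt:
```python
import torch
from diffusers import AutoPipelineForText2Image
pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("Arnie936/corgy_Hadl_LoRA")
# The trigger phrase from the card must appear in the prompt.
image = pipe("a photo of TOK person in a snowy forest", num_inference_steps=25).images[0]
image.save("tok_person.png")
```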
|
onnxruntime/sdxl-turbo
|
onnxruntime
| 2024-01-23T21:24:57Z | 0 | 2 | null |
[
"onnx",
"stable-diffusion",
"sdxl",
"onnxruntime",
"text-to-image",
"en",
"base_model:stabilityai/sdxl-turbo",
"base_model:quantized:stabilityai/sdxl-turbo",
"license:other",
"region:us"
] |
text-to-image
| 2024-01-19T22:43:11Z |
---
pipeline_tag: text-to-image
license: other
license_name: sai-nc-community
license_link: https://huggingface.co/stabilityai/sdxl-turbo/blob/main/LICENSE.TXT
base_model: stabilityai/sdxl-turbo
language:
- en
tags:
- stable-diffusion
- sdxl
- onnxruntime
- onnx
- text-to-image
---
# Stable Diffusion XL Turbo for ONNX Runtime CUDA
## Introduction
This repository hosts the optimized ONNX models of **SDXL Turbo** to accelerate inference with the ONNX Runtime CUDA execution provider on Nvidia GPUs. They cannot run with other execution providers such as CPU or DirectML.
The models are generated by [Olive](https://github.com/microsoft/Olive/tree/main/examples/stable_diffusion) with command like the following:
```
python stable_diffusion_xl.py --provider cuda --model_id stabilityai/sdxl-turbo --optimize --use_fp16_fixed_vae
```
See the [usage instructions](#usage-example) for how to run the SDXL pipeline with the ONNX files hosted in this repository.
## Model Description
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [STABILITY AI NON-COMMERCIAL RESEARCH COMMUNITY LICENSE](https://huggingface.co/stabilityai/sd-turbo/blob/main/LICENSE)
- **Model Description:** This is a conversion of the [SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo) model for [ONNX Runtime](https://github.com/microsoft/onnxruntime) inference with CUDA execution provider.
The VAE decoder is converted from [sdxl-vae-fp16-fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix). There are slight discrepancies between its output and that of the original VAE, but the decoded images should be [close enough for most purposes](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/discussions/7#64c5c0f8e2e5c94bd04eaa80).
## Usage Example
Follow the [demo instructions](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/stable_diffusion/README.md#run-demo-with-docker). Example steps:
0. Install nvidia-docker using these [instructions](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
1. Clone onnxruntime repository.
```shell
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime
```
2. Download the SDXL ONNX files from this repo
```shell
git lfs install
git clone https://huggingface.co/tlwu/sdxl-turbo-onnxruntime
```
3. Launch the docker
```shell
docker run --rm -it --gpus all -v $PWD:/workspace nvcr.io/nvidia/pytorch:23.10-py3 /bin/bash
```
4. Build ONNX Runtime from source
```shell
export CUDACXX=/usr/local/cuda-12.2/bin/nvcc
git config --global --add safe.directory '*'
sh build.sh --config Release --build_shared_lib --parallel --use_cuda --cuda_version 12.2 \
--cuda_home /usr/local/cuda-12.2 --cudnn_home /usr/lib/x86_64-linux-gnu/ --build_wheel --skip_tests \
--use_tensorrt --tensorrt_home /usr/src/tensorrt \
--cmake_extra_defines onnxruntime_BUILD_UNIT_TESTS=OFF \
--cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80 \
--allow_running_as_root
python3 -m pip install build/Linux/Release/dist/onnxruntime_gpu-*-cp310-cp310-linux_x86_64.whl --force-reinstall
```
If the GPU is not an A100, change CMAKE_CUDA_ARCHITECTURES=80 in the command line according to the GPU's compute capability (for example 89 for RTX 4090, or 86 for RTX 3090). If your machine has less than 64 GB of memory, replace --parallel with --parallel 4 --nvcc_threads 1 to avoid running out of memory.
5. Install libraries and requirements
```shell
python3 -m pip install --upgrade pip
cd /workspace/onnxruntime/python/tools/transformers/models/stable_diffusion
python3 -m pip install -r requirements-cuda12.txt
python3 -m pip install --upgrade polygraphy onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
```
6. Perform ONNX Runtime optimized inference
```shell
python3 demo_txt2img_xl.py \
"starry night over Golden Gate Bridge by van gogh" \
--version xl-turbo \
--engine-dir /workspace/sdxl-turbo-onnxruntime
```
|
deadniell/fade_valorant_latam_v3
|
deadniell
| 2024-01-23T21:19:12Z | 0 | 1 | null |
[
"valorant",
"espaรฑol latino",
"riot games",
"es",
"license:openrail",
"region:us"
] | null | 2024-01-23T21:17:39Z |
---
license: openrail
language:
- es
tags:
- valorant
- español latino
- riot games
---
|
ZiHDeng/peft-lora-starcoder1B-v2-personal-copilot-A100-40GB-yfw
|
ZiHDeng
| 2024-01-23T21:16:53Z | 5 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigcode/starcoderbase-1b",
"base_model:adapter:bigcode/starcoderbase-1b",
"license:bigcode-openrail-m",
"region:us"
] | null | 2024-01-22T07:58:03Z |
---
license: bigcode-openrail-m
library_name: peft
tags:
- generated_from_trainer
base_model: bigcode/starcoderbase-1b
model-index:
- name: peft-lora-starcoder1B-v2-personal-copilot-A100-40GB-yfw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# peft-lora-starcoder1B-v2-personal-copilot-A100-40GB-yfw
This model is a fine-tuned version of [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0536
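A usage example is not included in the card; the sketch below attaches the adapter to its base model with PEFT (access to the gated `bigcode/starcoderbase-1b` checkpoint is assumed, and the prompt is illustrative):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_id = "bigcode/starcoderbase-1b"  # gated: requires accepting the BigCode license on the Hub
adapter_id = "ZiHDeng/peft-lora-starcoder1B-v2-personal-copilot-A100-40GB-yfw"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```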
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3079 | 0.05 | 100 | 0.2772 |
| 0.1424 | 0.1 | 200 | 0.1385 |
| 0.106 | 0.15 | 300 | 0.1041 |
| 0.0751 | 0.2 | 400 | 0.0858 |
| 0.0621 | 0.25 | 500 | 0.0757 |
| 0.0536 | 0.3 | 600 | 0.0673 |
| 0.053 | 0.35 | 700 | 0.0667 |
| 0.0497 | 0.4 | 800 | 0.0625 |
| 0.0437 | 0.45 | 900 | 0.0605 |
| 0.0488 | 0.5 | 1000 | 0.0561 |
| 0.037 | 0.55 | 1100 | 0.0576 |
| 0.0394 | 0.6 | 1200 | 0.0518 |
| 0.033 | 0.65 | 1300 | 0.0538 |
| 0.0367 | 0.7 | 1400 | 0.0495 |
| 0.0306 | 0.75 | 1500 | 0.0510 |
| 0.0347 | 0.8 | 1600 | 0.0505 |
| 0.0259 | 0.85 | 1700 | 0.0502 |
| 0.0294 | 0.9 | 1800 | 0.0501 |
| 0.0256 | 0.95 | 1900 | 0.0539 |
| 0.0278 | 1.0 | 2000 | 0.0536 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/code-millenials-34b-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-23T21:12:09Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T20:58:11Z |
---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.8048
verified: false
---
# Bud Code Millenials 34B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval score of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, and GPT-3.5 by a large margin, and landing on par with GPT-4 (HumanEval ~82; ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval score of 88.2 and a context size of 168K; we will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]
### News ๐ฅ๐ฅ๐ฅ
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the millenial models, the eval script in the github repo is used for the above result.
Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### ๐ Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-34b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction}
### Response:"""
instruction = "Your code instruction here"  # replace with your actual instruction
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
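Note that a 34B model in full precision will not fit on a single consumer GPU. A common variation of the snippet above (a sketch, not part of the original card; it assumes `accelerate` is installed and enough combined GPU/CPU memory is available) loads the weights in float16 and lets `device_map="auto"` place them:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b")
model = AutoModelForCausalLM.from_pretrained(
    "budecosystem/code-millenials-34b",
    torch_dtype=torch.float16,   # halve the memory footprint vs. float32
    device_map="auto",           # shard across available devices (requires accelerate)
)

template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction}
### Response:"""

# Example instruction; replace with your own task
prompt = template.format(instruction="Write a Python function that reverses a string.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
sample = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(sample[0], skip_special_tokens=True))
```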
## Training details
The model was trained on 16 A100 80GB GPUs for approximately 50 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 16 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 2157 |
| learning_rate | 2e-5 |
| lr scheduler type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 16 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** Model may sometimes make errors, produce misleading contents, or struggle to manage tasks that are not related to coding.
|
yargpt/Vikhr-7b-0.1-GGUF
|
yargpt
| 2024-01-23T21:07:31Z | 4 | 2 |
transformers
|
[
"transformers",
"gguf",
"GGUF",
"text-generation",
"ru",
"en",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-23T20:53:04Z |
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
inference: false
language:
- ru
- en
tags:
- GGUF
---
<h1>yargpt/Vikhr-7b-0.1-gguf</h1>
<li>
Original model: <a href="https://huggingface.co/Vikhrmodels/Vikhr-7b-0.1"> Vikhrmodels/Vikhr-7b-0.1</a>
</li>
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). This is the source project for GGUF, providing both a Command Line Interface (CLI) and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), Known as the most widely used web UI, this project boasts numerous features and powerful extensions, and supports GPU acceleration.
* [Ollama](https://github.com/jmorganca/ollama) Ollama is a lightweight and extensible framework designed for building and running language models locally. It features a simple API for creating, managing, and executing models, along with a library of pre-built models for use in various applications.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), A comprehensive web UI offering GPU acceleration across all platforms and architectures, particularly renowned for storytelling.
* [GPT4All](https://gpt4all.io), This is a free and open source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/) An intuitive and powerful local GUI for Windows and macOS (Silicon), featuring GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). A notable web UI with a variety of unique features, including a comprehensive model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), An attractive, user-friendly character-based chat GUI for Windows and macOS (both Silicon and Intel), also offering GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), A Python library equipped with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), A Rust-based ML framework focusing on performance, including GPU support, and designed for ease of use.
* [ctransformers](https://github.com/marella/ctransformers), A Python library featuring GPU acceleration, LangChain support, and an OpenAI-compatible AI server.
* [localGPT](https://github.com/PromtEngineer/localGPT) An open-source initiative enabling private conversations with documents.
<!-- README_GGUF.md-about-gguf end -->
<!-- compatibility_gguf start -->
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
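The bits-per-weight (bpw) figures above translate directly into approximate file sizes. As a rough, hedged sketch for a 7B-parameter model (the parameter count is an assumption based on the model name):
```python
# Back-of-the-envelope file-size estimate: parameters * bits-per-weight / 8 bytes.
# This ignores metadata and the fact that not every tensor uses the same quant type,
# so real GGUF files will differ somewhat.
params = 7_000_000_000  # assumed 7B parameters for this model family
for name, bpw in [("Q2_K", 2.5625), ("Q3_K", 3.4375), ("Q4_K", 4.5), ("Q5_K", 5.5), ("Q6_K", 6.5625)]:
    size_gb = params * bpw / 8 / 1e9
    print(f"{name}: ~{size_gb:.1f} GB")
```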
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: yargpt/Vikhr-7b-0.1-gguf and, below it, a specific filename to download, such as one of the `.gguf` quantisation files listed in the repo.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
# Replace <filename>.gguf with one of the GGUF files listed in the repo
huggingface-cli download yargpt/Vikhr-7b-0.1-gguf <filename>.gguf --local-dir . --local-dir-use-symlinks False
```
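If you prefer to stay in Python rather than the CLI, the same single-file download can be done with `hf_hub_download`. This is a sketch: `<filename>.gguf` is a placeholder for one of the actual GGUF files listed in the repo.
```python
from huggingface_hub import hf_hub_download

# Replace "<filename>.gguf" with one of the GGUF files listed under the repo's Files tab
local_path = hf_hub_download(
    repo_id="yargpt/Vikhr-7b-0.1-gguf",
    filename="<filename>.gguf",
    local_dir=".",
)
print(local_path)
```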
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download yargpt/Vikhr-7b-0.1-gguf --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download yargpt/Vikhr-7b-0.1-gguf <filename>.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
|
CLMBR/npi-only-transformer-1
|
CLMBR
| 2024-01-23T21:00:38Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T14:29:46Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-only-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-only-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.224 | 0.03 | 76320 | 4.1960 |
| 4.021 | 1.03 | 152640 | 4.0259 |
| 3.9128 | 0.03 | 228960 | 3.9511 |
| 3.8414 | 1.03 | 305280 | 3.9098 |
| 3.7915 | 0.03 | 381600 | 3.8843 |
| 3.7489 | 1.03 | 457920 | 3.8684 |
| 3.7172 | 0.03 | 534240 | 3.8578 |
| 3.6886 | 1.03 | 610560 | 3.8503 |
| 3.6592 | 0.03 | 686880 | 3.8463 |
| 3.635 | 1.03 | 763200 | 3.8440 |
| 3.6089 | 0.03 | 839520 | 3.8414 |
| 3.5858 | 1.03 | 915840 | 3.8406 |
| 3.5679 | 0.03 | 992160 | 3.8411 |
| 3.5481 | 1.03 | 1068480 | 3.8400 |
| 3.5304 | 0.03 | 1144800 | 3.8423 |
| 3.5273 | 1.03 | 1221120 | 3.8431 |
| 3.5084 | 0.03 | 1297440 | 3.8436 |
| 3.4931 | 1.03 | 1373760 | 3.8460 |
| 3.4817 | 0.03 | 1450080 | 3.8460 |
| 3.4695 | 1.03 | 1526400 | 3.8482 |
| 3.4604 | 0.03 | 1602720 | 3.8497 |
| 3.451 | 0.03 | 1679040 | 3.8507 |
| 3.4443 | 1.03 | 1755360 | 3.8523 |
| 3.4359 | 0.03 | 1831680 | 3.8535 |
| 3.4238 | 1.03 | 1908000 | 3.8556 |
| 3.4097 | 0.03 | 1984320 | 3.8569 |
| 3.3949 | 1.03 | 2060640 | 3.8573 |
| 3.3833 | 0.03 | 2136960 | 3.8597 |
| 3.373 | 1.03 | 2213280 | 3.8602 |
| 3.3626 | 0.03 | 2289600 | 3.8611 |
| 3.3495 | 1.03 | 2365920 | 3.8634 |
| 3.3497 | 0.03 | 2442240 | 3.8635 |
| 3.3351 | 1.03 | 2518560 | 3.8644 |
| 3.3289 | 0.03 | 2594880 | 3.8649 |
| 3.3182 | 1.03 | 2671200 | 3.8660 |
| 3.3091 | 0.03 | 2747520 | 3.8667 |
| 3.3031 | 1.03 | 2823840 | 3.8655 |
| 3.2978 | 0.03 | 2900160 | 3.8657 |
| 3.2938 | 0.03 | 2976480 | 3.8646 |
| 3.2916 | 0.02 | 3052726 | 3.8634 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LoneStriker/code-millenials-34b-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-23T20:58:08Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T20:47:33Z |
---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.8048
verified: false
---
# Bud Code Millenials 34B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval score of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, and GPT-3.5 by a large margin, and landing on par with GPT-4 (HumanEval ~82; ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval score of 88.2 and a context size of 168K; we will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]
### News ๐ฅ๐ฅ๐ฅ
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the millenial models, the eval script in the github repo is used for the above result.
Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### ๐ Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-34b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction}
### Response:"""
instruction = "Your code instruction here"  # replace with your actual instruction
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Training details
The model was trained on 16 A100 80GB GPUs for approximately 50 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 16 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 2157 |
| learning_rate | 2e-5 |
| lr scheduler type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 16 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** Model may sometimes make errors, produce misleading contents, or struggle to manage tasks that are not related to coding.
|
linoyts/2000_ads_offset_noise_3
|
linoyts
| 2024-01-23T20:54:15Z | 50 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-23T20:24:07Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_0.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_1.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_2.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: an ad in the style of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - linoyts/2000_ads_offset_noise_3
<Gallery />
## Model description
### These are linoyts/2000_ads_offset_noise_3 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`2000_ads_offset_noise_3.safetensors` here ๐พ](/linoyts/2000_ads_offset_noise_3/blob/main/2000_ads_offset_noise_3.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:2000_ads_offset_noise_3:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`2000_ads_offset_noise_3_emb.safetensors` here ๐พ](/linoyts/2000_ads_offset_noise_3/blob/main/2000_ads_offset_noise_3_emb.safetensors)**.
- Place it in your `embeddings` folder
- Use it by adding `2000_ads_offset_noise_3_emb` to your prompt. For example, `an ad in the style of 2000_ads_offset_noise_3_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('linoyts/2000_ads_offset_noise_3', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='linoyts/2000_ads_offset_noise_3', filename='2000_ads_offset_noise_3_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('<s0><s1> ad of a llama wearing headphones').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` โ use `<s0><s1>` in your prompt
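Building on the diffusers snippet above, a small usage sketch (the second prompt is an illustrative example, not from the training data):
```py
# Assumes `pipeline` is already set up as in the diffusers snippet above
prompts = [
    "<s0><s1> ad of a llama wearing headphones",
    "<s0><s1> ad of a robot drinking coffee",  # illustrative prompt
]
for i, prompt in enumerate(prompts):
    image = pipeline(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save(f"2000s_ad_{i}.png")
```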
## Details
All [Files & versions](/linoyts/2000_ads_offset_noise_3/tree/main).
The weights were trained using [๐งจ diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Felipo/dqn-SpaceInvadersNoFrameskip-v4
|
Felipo
| 2024-01-23T20:53:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-23T20:52:50Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 510.00 +/- 166.78
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Felipo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Felipo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
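You can also evaluate the downloaded agent directly from Python with Stable Baselines3. This is a sketch: the `.zip` path is an assumed example of where `rl_zoo3.load_from_hub` places the file, and the atari extras must be installed (`pip install "stable-baselines3[extra]"`).
```python
# Sketch: run the downloaded agent with SB3 directly
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)  # matches the frame_stack setting listed under Hyperparameters below

# Assumed path; adjust to where load_from_hub saved the zip inside logs/
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip", env=env)

obs = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```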
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Felipo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
mayflowergmbh/Hessian-Disco-Daredevil-7B
|
mayflowergmbh
| 2024-01-23T20:51:51Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"DiscoResearch/DiscoLM_German_7b_v1",
"shadowml/Daredevil-7B",
"base_model:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:merge:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:shadowml/Daredevil-7B",
"base_model:merge:shadowml/Daredevil-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T20:44:00Z |
---
tags:
- merge
- mergekit
- lazymergekit
- DiscoResearch/DiscoLM_German_7b_v1
- shadowml/Daredevil-7B
base_model:
- DiscoResearch/DiscoLM_German_7b_v1
- shadowml/Daredevil-7B
---
# Hessian-Disco-Daredevil-7B
Hessian-Disco-Daredevil-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
* [shadowml/Daredevil-7B](https://huggingface.co/shadowml/Daredevil-7B)
## ๐งฉ Configuration
```yaml
models:
- model: LeoLM/leo-mistral-hessianai-7b
# No parameters necessary for base model
- model: DiscoResearch/DiscoLM_German_7b_v1
parameters:
density: 0.62
weight: 0.55
- model: shadowml/Daredevil-7B
parameters:
density: 0.56
weight: 0.55
merge_method: dare_ties
base_model: LeoLM/leo-mistral-hessianai-7b
parameters:
int8_mask: true
dtype: bfloat16
```
## ๐ป Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mayflowergmbh/Hessian-Disco-Daredevil-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Samanenayati/my-finetuned-bert
|
Samanenayati
| 2024-01-23T20:50:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:FacebookAI/roberta-large",
"base_model:adapter:FacebookAI/roberta-large",
"region:us"
] | null | 2024-01-22T22:25:50Z |
---
library_name: peft
base_model: roberta-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
LoneStriker/code-millenials-34b-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-23T20:47:31Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T20:38:38Z |
---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.8048
verified: false
---
# Bud Code Millenials 34B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval score of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, and GPT-3.5 by a large margin, and landing on par with GPT-4 (HumanEval ~82; ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval score of 88.2 and a context size of 168K; we will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]
### News ๐ฅ๐ฅ๐ฅ
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the millenial models, the eval script in the github repo is used for the above result.
Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### ๐ Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-34b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction}
### Response:"""
instruction = "Your code instruction here"  # replace with your actual instruction
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Training details
The model was trained on 16 A100 80GB GPUs for approximately 50 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 16 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 2157 |
| learning_rate | 2e-5 |
| lr scheduler type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 16 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** Model may sometimes make errors, produce misleading contents, or struggle to manage tasks that are not related to coding.
|
longluu/distilbert-toxic-comment-classifier
|
longluu
| 2024-01-23T20:45:22Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-21T14:11:58Z |
---
license: mit
language:
- en
metrics:
- accuracy
---
# Toxic post classification using DistilBert
This model fine-tunes a pretrained DistilBERT classifier on the Toxic Comment dataset https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge.
The goal is to classify whether a comment is toxic or not. Note that the labels in the original dataset are more fine-grained (i.e. different types of toxicity).
The model obtains a test accuracy of 95% on a balanced split.
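A minimal usage sketch with the `transformers` pipeline (it assumes this repo hosts a standard sequence-classification checkpoint, as the tags suggest; the example comments are illustrative):
```python
from transformers import pipeline

# Assumption: the repo hosts a standard DistilBERT sequence-classification checkpoint
classifier = pipeline("text-classification", model="longluu/distilbert-toxic-comment-classifier")

comments = [
    "Thanks for the detailed explanation, that really helped!",
    "You are an idiot and nobody wants you here.",
]
for comment, prediction in zip(comments, classifier(comments)):
    print(comment, "->", prediction["label"], round(prediction["score"], 3))
```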
|
LoneStriker/code-millenials-34b-4.65bpw-h6-exl2
|
LoneStriker
| 2024-01-23T20:38:36Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T20:30:18Z |
---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.8048
verified: false
---
# Bud Code Millenials 34B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval score of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, and GPT-3.5 by a large margin, and landing on par with GPT-4 (HumanEval ~82; ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval score of 88.2 and a context size of 168K; we will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]
### News ๐ฅ๐ฅ๐ฅ
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the millenial models, the eval script in the github repo is used for the above result.
Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### ๐ Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-34b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction}
### Response:"""
instruction = "Your code instruction here"  # replace with your actual instruction
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Training details
The model was trained on 16 A100 80GB GPUs for approximately 50 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 16 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 2157 |
| learning_rate | 2e-5 |
| lr scheduler type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 16 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** Model may sometimes make errors, produce misleading contents, or struggle to manage tasks that are not related to coding.
|
vgorce/phi2-samsum
|
vgorce
| 2024-01-23T20:35:12Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"dataset:samsum",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-01-23T20:29:58Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi2-samsum
results: []
datasets:
- samsum
---
# phi2-samsum
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the [samsum dataset](https://huggingface.co/datasets/samsum).
It achieves the following results on the evaluation set:
- Loss: 2.2606
## Model description
More information needed
## Intended uses & limitations
More information needed
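Although left empty by the card generator, the adapter is presumably intended for samsum-style dialogue summarisation. A minimal loading sketch (assuming the standard PEFT adapter layout; the prompt format is an assumption and may differ from the one used during fine-tuning):
```python
# Sketch: load the LoRA adapter on top of microsoft/phi-2 (assumes standard PEFT layout)
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "vgorce/phi2-samsum")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure!\nAmanda: I'll bring you some tomorrow :-)"
prompt = f"Summarize the following dialogue.\n{dialogue}\nSummary:"  # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```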
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7371 | 0.0 | 25 | 2.5507 |
| 2.5853 | 0.01 | 50 | 2.3917 |
| 2.3176 | 0.01 | 75 | 2.3258 |
| 2.3459 | 0.01 | 100 | 2.3066 |
| 2.3003 | 0.02 | 125 | 2.2957 |
| 2.2767 | 0.02 | 150 | 2.2883 |
| 2.2637 | 0.02 | 175 | 2.2835 |
| 2.3387 | 0.03 | 200 | 2.2787 |
| 2.3151 | 0.03 | 225 | 2.2759 |
| 2.1807 | 0.03 | 250 | 2.2767 |
| 2.4122 | 0.04 | 275 | 2.2703 |
| 2.139 | 0.04 | 300 | 2.2680 |
| 2.3887 | 0.04 | 325 | 2.2664 |
| 2.2124 | 0.05 | 350 | 2.2648 |
| 2.2271 | 0.05 | 375 | 2.2649 |
| 2.3335 | 0.05 | 400 | 2.2634 |
| 2.2411 | 0.06 | 425 | 2.2628 |
| 2.4075 | 0.06 | 450 | 2.2619 |
| 2.3136 | 0.06 | 475 | 2.2615 |
| 2.2328 | 0.07 | 500 | 2.2606 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/code-millenials-34b-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-23T20:30:16Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T20:23:03Z |
---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.8048
verified: false
---
# Bud Code Millenials 34B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval score of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, and GPT-3.5 by a large margin, and landing on par with GPT-4 (HumanEval ~82; ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval score of 88.2 and a context size of 168K; we will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]
### News ๐ฅ๐ฅ๐ฅ
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the millenial models, the eval script in the github repo is used for the above result.
Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### ๐ Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-34b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction}
### Response:"""
instruction = "Your code instruction here"  # replace with your actual instruction
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Training details
The model was trained on 16 A100 80GB GPUs for approximately 50 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 16 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 2157 |
| learning_rate | 2e-5 |
| lr scheduler type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 16 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** Model may sometimes make errors, produce misleading contents, or struggle to manage tasks that are not related to coding.
|
am-infoweb/rap_phase2_22jan_5i_v1
|
am-infoweb
| 2024-01-23T20:28:38Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-23T16:47:59Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: rap_phase2_22jan_5i_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rap_phase2_22jan_5i_v1
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0219
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3647 | 1.0 | 5010 | 0.0825 |
| 0.0693 | 2.0 | 10020 | 0.0517 |
| 0.0228 | 3.0 | 15030 | 0.0656 |
| 0.0288 | 4.0 | 20040 | 0.0327 |
| 0.0387 | 5.0 | 25050 | 0.0448 |
| 0.0171 | 6.0 | 30060 | 0.0207 |
| 0.0136 | 7.0 | 35070 | 0.0163 |
| 0.0059 | 8.0 | 40080 | 0.0200 |
| 0.0062 | 9.0 | 45090 | 0.0243 |
| 0.0002 | 10.0 | 50100 | 0.0233 |
| 0.002 | 11.0 | 55110 | 0.0219 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
llmixer/BigWeave-v8-90b-4.0bpw-h8-exl2
|
llmixer
| 2024-01-23T20:23:02Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"4.0bpw",
"h8",
"exl2",
"conversational",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T19:23:36Z |
---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- 4.0bpw
- h8
- exl2
---
Exllamav2 4.0bpw h8 quant for [BigWeave-v8-90b](https://huggingface.co/llmixer/BigWeave-v8-90b).
Calibration dataset: [llmixer/20k_random_data](https://huggingface.co/datasets/llmixer/20k_random_data)
|
nbeerbower/bruphin-alpha
|
nbeerbower
| 2024-01-23T20:15:15Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"base_model:cognitivecomputations/dolphin-2.2.1-mistral-7b",
"base_model:finetune:cognitivecomputations/dolphin-2.2.1-mistral-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T01:46:29Z |
---
license: apache-2.0
base_model:
- cognitivecomputations/dolphin-2.2.1-mistral-7b
- rwitz/go-bruins-v2
tags:
- merge
---
A simple linear merge of ehartford/dolphin-2.2.1-mistral-7b and rwitz/go-bruins-v2 using mergekit (the YAML config file is included in the repo).
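A hedged loading sketch (it assumes the merge ships as a standard Mistral-architecture causal LM, as the tags indicate):
```python
# Sketch: load the merged model like any other Mistral-style causal LM
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nbeerbower/bruphin-alpha")
model = AutoModelForCausalLM.from_pretrained("nbeerbower/bruphin-alpha", torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The three laws of robotics are", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```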
|
graceneutrality/projkect5
|
graceneutrality
| 2024-01-23T20:08:39Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-23T20:08:30Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: projkect5
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
paisanx/Reinforce-Pixelcopter-PLE-v3
|
paisanx
| 2024-01-23T20:08:11Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-23T20:08:08Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 41.00 +/- 23.69
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LoneStriker/code-millenials-34b-GGUF
|
LoneStriker
| 2024-01-23T20:07:48Z | 0 | 3 |
transformers
|
[
"transformers",
"gguf",
"code",
"license:llama2",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T18:47:17Z |
---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.8048
verified: false
---
# Bud Code Millenials 34B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval score of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, and GPT-3.5 by a large margin, and landing on par with GPT-4 (HumanEval ~82; ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval score of 88.2 and a context size of 168K; we will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]
### News ๐ฅ๐ฅ๐ฅ
- [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the millenial models, the eval script in the github repo is used for the above result.
Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### 🚀 Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-34b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction}
### Response:"""
instruction = "<Your code instruction here>"  # replace with your coding instruction
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Training details
The model was trained on 16 A100 80GB GPUs for approximately 50 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 16 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 2157 |
| learning_rate | 2e-5 |
| lr scheduler type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 16 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** The model may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.
|
humung/polyglot-ko-12.8b-safetensors-8bit-50steps
|
humung
| 2024-01-23T20:00:56Z | 2 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:beomi/KoAlpaca-Polyglot-12.8B",
"base_model:adapter:beomi/KoAlpaca-Polyglot-12.8B",
"region:us"
] | null | 2024-01-18T06:42:21Z |
---
library_name: peft
base_model: beomi/KoAlpaca-Polyglot-12.8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
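The card leaves this section blank; below is a minimal sketch, assuming this repository holds a PEFT (LoRA-style) adapter trained on top of the 8-bit quantized base model listed above. The prompt and generation settings are placeholders, not taken from the original setup.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "beomi/KoAlpaca-Polyglot-12.8B"
adapter_id = "humung/polyglot-ko-12.8b-safetensors-8bit-50steps"

# Load the base model in 8-bit, matching the "8bit" in the adapter name
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter weights from this repository
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "### 질문: 안녕하세요?\n\n### 답변:"  # placeholder prompt; adjust to your use case
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```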
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Utshav/distilbert-base-uncased-finetuned-imdb
|
Utshav
| 2024-01-23T19:59:51Z | 90 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-23T19:49:36Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6959 | 1.0 | 157 | 2.5440 |
| 2.5693 | 2.0 | 314 | 2.4636 |
| 2.5434 | 3.0 | 471 | 2.4249 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sidushdid/ViT-base-patch16-BUSI-Mendeley-Lion
|
sidushdid
| 2024-01-23T19:56:43Z | 176 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"biology",
"medical",
"ultrasound",
"breast cancer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-23T19:52:12Z |
---
license: mit
library_name: transformers
pipeline_tag: image-classification
tags:
- biology
- medical
- ultrasound
- breast cancer
---
|
sursani/Mistral-7B-v0.1-sft-ultrachat1000
|
sursani
| 2024-01-23T19:56:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-23T19:47:58Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- mistral
inference: false
---
## General Information
For this supervised fine-tuning, I am using:
* Mistral-7B-v0.1 LLM
* Datasets for loading an SFT dataset from the 🤗 hub, and preparing it for the model
* BitsandBytes and PEFT for fine-tuning the model on consumer hardware, leveraging [Q-LoRa](https://huggingface.co/blog/4bit-transformers-bitsandbytes), a technique which drastically reduces the compute requirements for fine-tuning
* TRL, a [library](https://huggingface.co/docs/trl/index) which includes useful Trainer classes for LLM fine-tuning; a minimal sketch putting these pieces together is shown below.
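This is not the exact training script for this adapter; it is a minimal Q-LoRA SFT sketch under stated assumptions: the dataset slice (a 1,000-sample cut of `HuggingFaceH4/ultrachat_200k`, guessed from the repo name), the LoRA settings, and the hyperparameters are illustrative, and argument names can vary across `trl` versions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

base_id = "mistralai/Mistral-7B-v0.1"

# Q-LoRA: load the frozen base model in 4-bit so it fits on consumer hardware
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token

# LoRA adapter that receives the gradient updates (settings are illustrative)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# A small UltraChat slice (assumed), matching the "ultrachat1000" suffix of this adapter
dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1000]")

def formatting_func(batch):
    # Flatten each multi-turn chat (a list of {"role", "content"} dicts) into one training string
    return [
        "\n".join(f"{m['role']}: {m['content']}" for m in messages)
        for messages in batch["messages"]
    ]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    formatting_func=formatting_func,
    max_seq_length=1024,
    args=TrainingArguments(
        output_dir="mistral-7b-sft-ultrachat1000",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()
```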
|
DiscoResearch/DiscoLM_German_7b_v1
|
DiscoResearch
| 2024-01-23T19:55:46Z | 562 | 66 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"Mistral",
"finetune",
"chatml",
"DPO",
"German",
"Deutsch",
"synthetic data",
"conversational",
"de",
"en",
"base_model:LeoLM/leo-mistral-hessianai-7b",
"base_model:finetune:LeoLM/leo-mistral-hessianai-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T13:43:08Z |
---
base_model: LeoLM/leo-mistral-hessianai-7b
tags:
- Mistral
- finetune
- chatml
- DPO
- German
- Deutsch
- synthetic data
model-index:
- name: DiscoLM_German_7b_v1
results: []
license: apache-2.0
language:
- de
- en
---
# DiscoLM German 7b v1

## Table of Contents
1. [Introduction](#introduction)
2. [Demo](#demo)
3. [Downloads](#Downloads)
4. [Prompt Format](#prompt-format)
5. [Results](#results)
6. [Evaluation](#evaluation)
7. [Dataset](#dataset)
8. [Limitations & Biases](#limitations--biases)
9. [Acknowledgements](#acknowledgements)
10. [About DiscoResearch](#about-discoresearch)
11. [Disclaimer](#disclaimer)
# Introduction
**DiscoLM German 7b** is a Mistral-based large language model with a focus on German-language applications and the successor of the [EM German](https://huggingface.co/jphme/em_german_leo_mistral) model family.
It was trained on a large dataset of instructions in German and English with a SFT finetuning phase followed by additional DPO reinforcement learning.
The model is optimized for German text, providing proficiency in understanding, generating, and interacting with German language content while preserving its fluency in English and excelling at translation tasks.
Our goal with DiscoLM German was not to beat benchmarks, but to provide a robust and reliable model for everyday use that can serve as a drop-in replacement for ChatGPT and other proprietary models.
We find that the perceived quality of its German-language output is even higher than that of GPT-4 in many cases; however, it won't compete with larger models and top English 7b models on very complex reasoning, math or coding tasks.
# Demo
Please find a Demo and try the model at [demo.discoresearch.org](https://demo.discoresearch.org/) (in case the Demo is down and you have questions, you can contact us on our [Discord](https://discord.gg/ttNdas89f3)).
# Downloads
## Model Links
We will update the links as soon as the quants are available on HuggingFace.
| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| DiscoLM German 7b v1 | [Link](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1) | [Link](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GPTQ) | [Link](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-GGUF) | [Link](https://huggingface.co/TheBloke/DiscoLM_German_7b_v1-AWQ) |
# Prompt Format
DiscoLM German uses ChatML as the prompt format, which enables OpenAI endpoint compatibility and is supported by most inference libraries and frontends.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding the rules, roles, and stylistic choices of the model.
```
<|im_start|>system
Du bist ein hilfreicher Assistent.<|im_end|>
<|im_start|>user
Wer bist du?<|im_end|>
<|im_start|>assistant
Ich bin ein Sprachmodell namens DiscoLM German und ich wurde von DiscoResearch trainiert.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "Du bist ein hilfreicher Assistent."},
{"role": "user", "content": "Wer bist du?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
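For example, reusing the `messages`, `tokenizer`, and `model` objects from the snippet above:

```python
gen_input = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)
output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```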
## Retrieval Format
You can use a special retrieval format to improve steerability and reduce hallucinations for RAG applications (other, more standard formats should also work; this is purely optional).
Example:
```
### System:
Du bist ein hilfreicher Assistent. Fรผr die folgende Aufgabe stehen dir zwischen den Tags BEGININPUT und ENDINPUT mehrere Quellen zur Verfรผgung. Metadaten zu den einzelnen Quellen wie Autor, URL o.รค. sind zwischen BEGINCONTEXT und ENDCONTEXT zu finden, danach folgt der Text der Quelle. Die eigentliche Aufgabe oder Frage ist zwischen BEGININSTRUCTION und ENDINSTRUCTION zu finden. Beantworte diese ausschlieรlich mit Informationen aus den gegebenen Quellen und gebe die Information zur genutzten Quelle unter "Quelle:" an. Sollten die Quellen keine relevanten Informationen enthalten, antworte: "Mit den gegebenen Informationen ist diese Frage nicht zu beantworten."
### User Prompt:
BEGININPUT
BEGINCONTEXT
url: https://this.is.fake.news
time: 2089-09-01
ENDCONTEXT
Buxtehude ist die grรถรte Stadt Deutschlands mit 96.56 Millionen Einwohnern.
ENDINPUT
BEGININSTRUCTION
Was ist die grรถรte deutsche Stadt?
ENDINSTRUCTION
### Model Answer:
Die grรถรte deutsche Stadt ist Buxtehude.
Quelle:
url: https://this.is.fake.news
time: 2089-09-01
```
## Function Calling
The model also supports structured outputs/function calling, although this is a very experimental feature and YMMV.
This will be improved in the future.
The model will prefix function calls with `<functioncall>`, and you can provide results in the response with `<functionresponse>` for multi-turn applications.
Example:
```
### System:
Du bist ein hilfreicher Assistent. Extrahiere alle Personen aus den Eingaben des Users.
Du hast Zugriff auf folgende Funktionen:
{'name': 'PersonList',
'description': 'Extrahiere die Namen aller im Text vorkommenden Personen',
'parameters': {'$defs': {'Person': {'description': 'Details รผber eine person',
'properties': {'name': {'title': 'Name', 'type': 'string'},
'job': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Job'},
'age': {'anyOf': [{'type': 'integer'}, {'type': 'null'}],
'title': 'Age'}},
'required': ['name', 'job', 'age'],
'title': 'Person',
'type': 'object'}},
'properties': {'person_list': {'items': {'$ref': '#/$defs/Person'},
'title': 'Person List',
'type': 'array'}},
'required': ['person_list'],
'type': 'object'}}
### User Prompt:
Bjรถrn (25) und Jan sind die Grรผnder von ellamind.
### Model Answer:
<functioncall> {"name": "PersonList", "arguments": '{"person_list": ["{"name": "Bjรถrn", "job": "founder", "age": 25}, {"name": "Jan", "job": "founder", "age": null}]}'}
```
# Results
- to follow -
# Evaluation
As written above, we believe that current benchmarks don't capture the full spectrum of LLM capabilities very well. We didn't look at any benchmark results (besides training losses) until the work on DiscoLM was finished and didn't include any data resembling common benchmark formats in our training data.
That said, preliminary results with a German version of MT Bench show promising results: while lagging on coding and extraction tasks, DiscoLM German 7b performs not far below GPT-3.5-turbo on many tasks and even significantly outperforms it in the reasoning category.

Additional Benchmark results will follow. The biggest strength of this model (language quality as perceived by native speakers) can't yet be captured in a benchmark - please let us know if you have an idea how to change this!
# Dataset
The dataset is a mixture of multi-turn chats, retrieval instructions and synthetically generated instructions spanning many topics and applications.
# Limitations & Biases
This model can produce factually incorrect and offensive output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate biased or otherwise offensive outputs and it is the responsibility of the user to implement a safety/moderation layer. Please use with caution.
# Acknowledgements
DiscoLM German is a [DiscoResearch](https://huggingface.co/DiscoResearch) project led by [JP Harries](https://huggingface.co/jphme) and supported by [Bjรถrn Plรผster](https://huggingface.co/bjoernp) and [Daniel Auras](https://huggingface.co/rasdani).
We thank [HessianAI](https://hessian.ai/) for providing compute & support for various DiscoResearch projects and our friends at [LAION](https://laion.ai) for their work on LeoLM and scientific advice.
Development of DiscoLM German 7b was sponsored by **[ellamind](https://ellamind.com)**, where some of our founders are working on creating customized models for business applications with a focus on non-English language applications. Please get in contact if you need customized models for your business!
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# About DiscoResearch
DiscoResearch is an aspiring open research community for AI enthusiasts and LLM hackers. Come join our [Discord](https://discord.gg/ttNdas89f3), share your opinions and ideas, and advance open LLM research with us!
# Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. This model should only be deployed with additional safety measures in place.
|
OldDog77/glyph
|
OldDog77
| 2024-01-23T19:46:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:afl-3.0",
"region:us"
] |
text-to-image
| 2024-01-23T19:46:24Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/a09d4b19be6c11d58c88aed730513e35.jpg
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: null
license: afl-3.0
---
# glyph
<Gallery />
## Download model
[Download](/OldDog77/glyph/tree/main) them in the Files & versions tab.
|
shyamsubbu/nso_mistral_7B
|
shyamsubbu
| 2024-01-23T19:42:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T19:42:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
somosnlp-hackathon-2022/t5-small-spanish-nahuatl
|
somosnlp-hackathon-2022
| 2024-01-23T19:41:50Z | 66 | 16 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"translation",
"es",
"nah",
"multilingual",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-29T16:35:17Z |
---
language:
- es
- nah
- multilingual
license: apache-2.0
tags:
- translation
widget:
- text: 'translate Spanish to Nahuatl: Mi hermano es un ajolote'
---
# t5-small-spanish-nahuatl
Nahuatl is the most widely spoken indigenous language in Mexico. However, training a neural network for the neural machine translation task is challenging due to the lack of structured data. The most popular datasets, such as the Axolotl and bible-corpus, only consist of ~16,000 and ~7,000 samples, respectively. Moreover, there are multiple variants of Nahuatl, which makes this task even more difficult. For example, it is possible to find a single word from the Axolotl dataset written in more than three different ways. Therefore, we leverage the T5 text-to-text prefix training strategy to compensate for the lack of data. We first train the multilingual model to learn Spanish and then adapt it to Nahuatl. The resulting T5 Transformer successfully translates short sentences. Finally, we report Chrf and BLEU results.
## Model description
This model is a T5 Transformer ([t5-small](https://huggingface.co/t5-small)) fine-tuned on Spanish and Nahuatl sentences collected from the web. The dataset is normalized using 'sep' normalization from [py-elotl](https://github.com/ElotlMX/py-elotl).
## Usage
```python
from transformers import AutoModelForSeq2SeqLM
from transformers import AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')
tokenizer = AutoTokenizer.from_pretrained('hackathon-pln-es/t5-small-spanish-nahuatl')
model.eval()
sentence = 'muchas flores son blancas'
input_ids = tokenizer('translate Spanish to Nahuatl: ' + sentence, return_tensors='pt').input_ids
outputs = model.generate(input_ids)
# outputs = miak xochitl istak
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
```
## Approach
### Dataset
Since the Axolotl corpus contains misalignments, we select the best samples (12,207). We also use the [bible-corpus](https://github.com/christos-c/bible-corpus) (7,821).
| Axolotl best aligned books |
|:-----------------------------------------------------:|
| Anales de Tlatelolco |
| Diario |
| Documentos nauas de la Ciudad de Mรฉxico del siglo XVI |
| Historia de Mรฉxico narrada en nรกhuatl y espaรฑol |
| La tinta negra y roja (antologรญa de poesรญa nรกhuatl) |
| Memorial Breve (Libro las ocho relaciones) |
| Mรฉtodo auto-didรกctico nรกhuatl-espaรฑol |
| Nican Mopohua |
| Quinta Relaciรณn (Libro las ocho relaciones) |
| Recetario Nahua de Milpa Alta D.F |
| Testimonios de la antigua palabra |
| Trece Poetas del Mundo Azteca |
| Una tortillita nomรกs - Se taxkaltsin saj |
| Vida econรณmica de Tenochtitlan |
Also, we collected 3,000 extra samples from the web to increase the data.
### Model and training
We employ two training stages using a multilingual T5-small. The advantage of this model is that it can handle different vocabularies and prefixes. T5-small is pre-trained on different tasks and languages (French, Romanian, English, German).
### Training-stage 1 (learning Spanish)
In training stage 1, we first introduce Spanish to the model. The goal is to learn a new language rich in data (Spanish) and not lose the previous knowledge. We use the English-Spanish [Anki](https://www.manythings.org/anki/) dataset, which consists of 118,964 text pairs. The model is trained until convergence, adding the prefix "Translate Spanish to English: ".
### Training-stage 2 (learning Nahuatl)
We use the pre-trained Spanish-English model to learn Spanish-Nahuatl. Since the amount of Nahuatl pairs is limited, we also add 20,000 samples from the English-Spanish Anki dataset. This two-task training avoids overfitting and makes the model more robust.
### Training setup
We train the models on the same datasets for 660k steps using batch size = 16 and a learning rate of 2e-5.
## Evaluation results
We evaluate the models on the same 505 validation Nahuatl sentences for a fair comparison. Finally, we report the results using the Chrf and SacreBLEU Hugging Face metrics:
| English-Spanish pretraining | Validation loss | BLEU | Chrf |
|:----------------------------:|:---------------:|:-----|-------:|
| False | 1.34 | 6.17 | 26.96 |
| True | 1.31 | 6.18 | 28.21 |
The English-Spanish pretraining improves BLEU and Chrf and leads to faster convergence. The evaluation is available on the [eval.ipynb](https://github.com/milmor/spanish-nahuatl-translation/blob/main/eval.ipynb) notebook.
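The full evaluation lives in the linked notebook; a minimal sketch of computing the same metrics with the Hugging Face `evaluate` library (the sentences below are placeholders) could look like this:

```python
import evaluate

chrf = evaluate.load("chrf")
sacrebleu = evaluate.load("sacrebleu")

predictions = ["miak xochitl istak"]    # model outputs (placeholder)
references = [["miak xochitl istak"]]   # one list of reference translations per prediction

print(chrf.compute(predictions=predictions, references=references))
print(sacrebleu.compute(predictions=predictions, references=references))
```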
## References
- Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer.
- Ximena Gutierrez-Vasques, Gerardo Sierra, and Hernandez Isaac. 2016. Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In International Conference on Language Resources and Evaluation (LREC).
- https://github.com/christos-c/bible-corpus
- https://github.com/ElotlMX/py-elotl
## Team members
- Emilio Alejandro Morales [(milmor)](https://huggingface.co/milmor)
- Rodrigo Martรญnez Arzate [(rockdrigoma)](https://huggingface.co/rockdrigoma)
- Luis Armando Mercado [(luisarmando)](https://huggingface.co/luisarmando)
- Jacobo del Valle [(jjdv)](https://huggingface.co/jjdv)
|
keremgencer/radio
|
keremgencer
| 2024-01-23T19:40:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T19:39:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chavudosoa/test2
|
chavudosoa
| 2024-01-23T19:32:13Z | 0 | 0 |
keras
|
[
"keras",
"text-generation",
"en",
"license:mit",
"region:us"
] |
text-generation
| 2024-01-23T19:29:13Z |
---
license: mit
language:
- en
library_name: keras
pipeline_tag: text-generation
---
|
JohnDoe70/tinyroberta-squad2-finetuned-squad
|
JohnDoe70
| 2024-01-23T19:18:23Z | 13 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"onnx",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:JohnDoe70/tinyroberta-squad2",
"base_model:quantized:JohnDoe70/tinyroberta-squad2",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-23T17:17:22Z |
---
base_model: JohnDoe70/tinyroberta-squad2
tags:
- generated_from_trainer
model-index:
- name: tinyroberta-squad2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyroberta-squad2-finetuned-squad
This model is a fine-tuned version of [JohnDoe70/tinyroberta-squad2](https://huggingface.co/JohnDoe70/tinyroberta-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1086
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.118 | 1.0 | 4119 | 0.0958 |
| 0.0857 | 2.0 | 8238 | 0.0864 |
| 0.0639 | 3.0 | 12357 | 0.1086 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
amusktweewt/lanes_1
|
amusktweewt
| 2024-01-23T19:12:29Z | 0 | 0 | null |
[
"image-segmentation",
"region:us"
] |
image-segmentation
| 2024-01-06T19:05:36Z |
---
pipeline_tag: image-segmentation
base_model: YOLOv8x-seg
---
|
ajayrathod/phi-2-qlora-arxiv
|
ajayrathod
| 2024-01-23T19:05:57Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"region:us"
] | null | 2024-01-15T17:43:13Z |
---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
silvente93/tfm_rev5
|
silvente93
| 2024-01-23T18:46:23Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-23T16:22:08Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: tfm_rev5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tfm_rev5
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
silvente93/tfm_rev4
|
silvente93
| 2024-01-23T18:45:55Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-23T16:16:38Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: tfm_rev4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tfm_rev4
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
hndc/distilbert-base-uncased-finetuned-clinic
|
hndc
| 2024-01-23T18:41:54Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-23T16:11:05Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinic
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7927
- Accuracy: 0.9152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.3033 | 0.7465 |
| 3.8073 | 2.0 | 636 | 1.8992 | 0.8648 |
| 3.8073 | 3.0 | 954 | 1.1785 | 0.8942 |
| 1.72 | 4.0 | 1272 | 0.8759 | 0.9094 |
| 0.9208 | 5.0 | 1590 | 0.7927 | 0.9152 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sanduntg/vsorts-llama2-v2
|
sanduntg
| 2024-01-23T18:32:54Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-23T18:32:25Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
dotvignesh/TAVGen-CodeNinja-7b-4bit
|
dotvignesh
| 2024-01-23T18:24:26Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-01-23T18:17:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
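As a starting point, here is a minimal loading sketch. It assumes the standard 🤗 transformers text-generation API and that the repo ships a chat template; the prompt and generation settings are illustrative only.

```python
# Sketch only: assumes the standard transformers API works for this repo and that
# bitsandbytes is installed for the 4-bit weights; adjust devices to your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dotvignesh/TAVGen-CodeNinja-7b-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```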
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rombodawg/Everyone-Coder-4x7b-Base
|
rombodawg
| 2024-01-23T18:19:14Z | 4,269 | 42 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"moe",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T00:20:14Z |
---
license: cc-by-4.0
tags:
- merge
- moe
---
Everyone-Coder-4x7b-Base

The EveryoneLLM series is a new Mixtral-type family of models created using experts that were fine-tuned by the community, for the community. This is the first model released in the series, and it is a coding-specific model. EveryoneLLM, a more generalized model, will be released in the near future, after more work is done to refine the process of merging Mistral models into larger Mixtral models.
The goal of the EveryoneLLM series is to be a replacement for, or an alternative to, Mixtral-8x7b that is more suitable for both general and specific use, as well as easier to fine-tune. Since Mistralai is being secretive about the "secret sauce" that makes Mixtral-Instruct such an effective fine-tune of the Mixtral base model, I've decided it's time for the community to compete with Mistralai directly on our own.
The models used in this merge are as follows:
- https://huggingface.co/fblgit/UNA-TheBeagle-7b-v1
- https://huggingface.co/LucciAI/openchat-3.5-0106-function-calling
- https://huggingface.co/WizardLM/WizardMath-7B-V1.1
- https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
Thank you to the creators of the AI models above; they deserve full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the success we have in the open-source community.
You can find the write-up for this model here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
The config for the merge can be found below:
```yaml
base_model: mistralai_Mistral-7B-v0.1
gate_mode: hidden
dtype: float16
experts:
- source_model: cognitivecomputations_dolphin-2.6-mistral-7b-dpo-laser
positive_prompts:
- "Help me debug this code."
- "Rewrite this function in Python."
- "Optimize this C# script."
- "Implement this feature using JavaScript."
- "Convert this HTML structure into a more efficient design."
- "Assist me with writing a program that"
- source_model: fblgit_UNA-TheBeagle-7b-v1
positive_prompts:
- "How do you"
- "Explain the concept of"
- "Give an overview of"
- "Compare and contrast between"
- "Provide information about"
- "Help me understand"
- "Summarize"
- "Make a recommendation on"
- "Answer this question"
- source_model: LucciAI_openchat-3.5-0106-function-calling
positive_prompts:
- "Write a program to solve this problem"
- "Modify this function to improve its performance"
- "Refactor this code to enhance readability"
- "Create a custom function for this specific use case"
- "Optimize this algorithm to reduce computational complexity"
- "Implement this feature by extending existing codebase"
- "Integrate this API call into the application"
- "Help me troubleshoot and fix this bug"
- "Review and test this code snippet before deployment"
- "Analyze this error log to identify potential issues"
- "Generate a set of unit tests for this module"
- "Evaluate different approaches to solving this problem"
- "Do a web search for"
- "Use the plugin to"
- source_model: WizardLM_WizardMath-7B-V1.1
positive_prompts:
- "add these numbers"
- "whats 2+2"
- "subtraction"
- "division"
- "multiplication"
- "addition"
- "I need help with a math problem"
- "Solve for x"
- "Add these two numbers together: 4 + 3 = 7"
- "Multiply 5 by 6: 5 * 6 = 30"
- "Divide 8 by 2: 8 / 2 = 4"
- "Find the remainder when 9 is divided by 3: 9 % 3 = 0"
- "Calculate the square root of 16: sqrt(16) = 4"
- "Simplify the expression (a+b)/(c-d): (a+b)/(c-d)"
- "Factor out the common factor of 2 from 4x + 6y: 2(2x + 3y)"
- "Solve for x in the equation 3x - 7 = 2x + 5: x = 12"
- "Graph the line y = 2x + 3"
- "Approximate pi to three decimal places: 3.142"
- "Find the derivative of f(x) = sin(x): f'(x) = cos(x)"
- "Integrate g(x) = x^2 over the interval [0, 1]: g(1) - g(0) = 1/3"
- "Calculate the determinant of the matrix A = [[2, 3], [4, 5]]: det(A) = 2*5 - 3*4 = -2"
- "Solve the system of equations Ax = b: x = [-5, 10]"
- "Calculate the sum of the first n natural numbers using the formula Sn = n*(n+1)/2: sum(n=1 to 5) = 15"
```
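A config in this format is the input to mergekit's MoE builder; a typical invocation would look something like `mergekit-moe everyone-coder.yml ./Everyone-Coder-4x7b-Base`. The filename and output path shown here are illustrative and are not taken from the original write-up.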
|
LoneStriker/Crunchy-onion-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-23T18:18:34Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lemonilia/LimaRP",
"dataset:grimulkan/theory-of-mind",
"dataset:Epiculous/Gnosis",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T18:06:27Z |
---
license: agpl-3.0
datasets:
- lemonilia/LimaRP
- grimulkan/theory-of-mind
- Epiculous/Gnosis
---
# Crunchy-onion
This model was created by training the Mixtral base model on LimaRP (ShareGPT format provided by SAO), theory of mind, and Gnosis (provided by jeiku).
The 4-bit QLoRA adapter was then merged into Mixtral Instruct, resulting in what you see here.
Works best with the ChatML instruct format; a prompt sketch follows below.
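A minimal ChatML prompt sketch. The marker tokens follow the standard ChatML convention; the example content and any sampling settings are illustrative, not taken from this card.

```python
# ChatML-style prompt layout; swap in your own system prompt and user turn.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful roleplaying assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Introduce your character in two sentences.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```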
|
rombodawg/Open_Gpt4_8x7B_v0.2_q8_0_gguf
|
rombodawg
| 2024-01-23T18:18:19Z | 37 | 5 | null |
[
"gguf",
"merge",
"moe",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-10T16:43:40Z |
---
license: cc-by-4.0
tags:
- merge
- moe
---
Open_Gpt4_v0.2
This is the quantized GGUF version for inference. If you want the unquantized version for merging and training, please refer to the repo below:
- https://huggingface.co/rombodawg/Open_Gpt4_8x7B_v0.2

This model is a TIES merge of Mixtral-8x7B-Instruct-v0.1 and bagel-dpo-8x7b-v0.2, with MixtralOrochi8x7B as the base model.
I was very impressed with MixtralOrochi8x7B's performance and multifaceted use cases, as it is already a merge of many useful Mixtral models such as Mixtral Instruct,
Noromaid-v0.1-mixtral, openbuddy-mixtral, and possibly other models that were not named. My goal was to expand the model's capabilities and make it even more useful, maybe even competitive with closed-source models like GPT-4. But for that, more testing is required. I hope the community can help me determine whether it deserves its name.
This is the second iteration of this model, using better models in the merge to (hopefully) improve performance.
Base model:
- https://huggingface.co/smelborp/MixtralOrochi8x7B
Merged models:
- https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1
- https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2
Instruct template: Alpaca
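A minimal inference sketch with llama-cpp-python, using an Alpaca-style prompt. The local GGUF filename, context size, and generation settings below are assumptions, not taken from this card.

```python
# Sketch only: assumes the q8_0 GGUF file has been downloaded locally and that
# llama-cpp-python is installed (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="open_gpt4_8x7b_v0.2.q8_0.gguf", n_ctx=4096)

prompt = (
    "### Instruction:\n"
    "Summarize what a TIES merge does in one paragraph.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```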
Merge config:
```yaml
models:
- model: Mixtral-8x7B-Instruct-v0.1
parameters:
density: .5
weight: 1
- model: bagel-dpo-8x7b-v0.2
parameters:
density: .5
weight: .7
merge_method: ties
base_model: MixtralOrochi8x7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
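A config like this is typically run through mergekit, e.g. `mergekit-yaml open_gpt4.yml ./Open_Gpt4_8x7B_v0.2`; the filename and output path are illustrative and are not specified in this card.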
|
CLMBR/binding-case-transformer-4
|
CLMBR
| 2024-01-23T18:17:55Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T20:26:23Z |
---
tags:
- generated_from_trainer
model-index:
- name: binding-case-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-case-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
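For reference, these settings correspond roughly to the following 🤗 `TrainingArguments` (a sketch for illustration only; the actual training script, model, and dataset are not documented in this card):

```python
# Rough reconstruction of the reported hyperparameters; everything not listed
# above (model, data, logging, saving) is left at defaults or omitted.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="binding-case-transformer-4",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=4,
    lr_scheduler_type="linear",
    max_steps=3052726,
)
```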
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2157 | 0.03 | 76320 | 4.1883 |
| 4.0128 | 1.03 | 152640 | 4.0226 |
| 3.9058 | 0.03 | 228960 | 3.9491 |
| 3.8369 | 1.03 | 305280 | 3.9090 |
| 3.7874 | 0.03 | 381600 | 3.8846 |
| 3.746 | 0.03 | 457920 | 3.8693 |
| 3.7159 | 1.03 | 534240 | 3.8592 |
| 3.6869 | 0.03 | 610560 | 3.8535 |
| 3.6567 | 0.03 | 686880 | 3.8487 |
| 3.6334 | 1.03 | 763200 | 3.8459 |
| 3.6091 | 0.03 | 839520 | 3.8445 |
| 3.5914 | 1.03 | 915840 | 3.8436 |
| 3.5693 | 0.03 | 992160 | 3.8437 |
| 3.5499 | 1.03 | 1068480 | 3.8446 |
| 3.5373 | 0.03 | 1144800 | 3.8454 |
| 3.5171 | 1.03 | 1221120 | 3.8467 |
| 3.502 | 0.03 | 1297440 | 3.8484 |
| 3.4884 | 1.03 | 1373760 | 3.8506 |
| 3.4765 | 0.03 | 1450080 | 3.8514 |
| 3.4688 | 0.03 | 1526400 | 3.8536 |
| 3.4595 | 1.03 | 1602720 | 3.8547 |
| 3.4525 | 0.03 | 1679040 | 3.8556 |
| 3.4446 | 1.03 | 1755360 | 3.8578 |
| 3.4327 | 0.03 | 1831680 | 3.8597 |
| 3.4217 | 1.03 | 1908000 | 3.8595 |
| 3.4097 | 0.03 | 1984320 | 3.8613 |
| 3.3975 | 1.03 | 2060640 | 3.8635 |
| 3.3868 | 0.03 | 2136960 | 3.8649 |
| 3.3748 | 1.03 | 2213280 | 3.8654 |
| 3.3606 | 0.03 | 2289600 | 3.8669 |
| 3.3532 | 1.03 | 2365920 | 3.8672 |
| 3.3371 | 0.03 | 2442240 | 3.8684 |
| 3.3277 | 1.03 | 2518560 | 3.8695 |
| 3.3201 | 0.03 | 2594880 | 3.8688 |
| 3.31 | 0.03 | 2671200 | 3.8694 |
| 3.3054 | 0.03 | 2747520 | 3.8694 |
| 3.3024 | 1.03 | 2823840 | 3.8686 |
| 3.2981 | 0.03 | 2900160 | 3.8676 |
| 3.2952 | 0.03 | 2976480 | 3.8658 |
| 3.2872 | 0.02 | 3052726 | 3.8636 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
minhduc201/vinallama-peft-7b-math-solver
|
minhduc201
| 2024-01-23T18:06:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-22T16:19:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
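As a starting point, here is a minimal loading sketch, assuming this repo holds a PEFT (LoRA) adapter as the name suggests. The base model id below is a guess inferred from the repo name and should be replaced with the actual base checkpoint.

```python
# Sketch only: the base model id is an assumption, not stated in this card.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "vilm/vinallama-7b-chat"  # assumed base model
adapter_id = "minhduc201/vinallama-peft-7b-math-solver"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```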
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/Crunchy-onion-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-23T18:06:26Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lemonilia/LimaRP",
"dataset:grimulkan/theory-of-mind",
"dataset:Epiculous/Gnosis",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T17:56:24Z |
---
license: agpl-3.0
datasets:
- lemonilia/LimaRP
- grimulkan/theory-of-mind
- Epiculous/Gnosis
---
# Crunchy-onion
This model was created by training the Mixtral base model on LimaRP (ShareGPT format provided by SAO), theory of mind, and Gnosis (provided by jeiku).
The 4-bit QLoRA adapter was then merged into Mixtral Instruct, resulting in what you see here.
Works best with the ChatML instruct format.
|
CLMBR/npi-only-lstm-0
|
CLMBR
| 2024-01-23T18:04:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-18T15:12:16Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-only-lstm-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-only-lstm-0
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7901 | 0.03 | 76320 | 4.7559 |
| 4.504 | 1.03 | 152640 | 4.4747 |
| 4.3606 | 0.03 | 228960 | 4.3405 |
| 4.2698 | 1.03 | 305280 | 4.2576 |
| 4.2099 | 0.03 | 381600 | 4.2008 |
| 4.1617 | 1.03 | 457920 | 4.1598 |
| 4.1238 | 0.03 | 534240 | 4.1281 |
| 4.0951 | 1.03 | 610560 | 4.1041 |
| 4.0667 | 0.03 | 686880 | 4.0848 |
| 4.0409 | 1.03 | 763200 | 4.0688 |
| 4.0202 | 0.03 | 839520 | 4.0563 |
| 4.0008 | 1.03 | 915840 | 4.0450 |
| 3.9836 | 0.03 | 992160 | 4.0350 |
| 3.9693 | 1.03 | 1068480 | 4.0277 |
| 3.9538 | 0.03 | 1144800 | 4.0212 |
| 3.953 | 1.03 | 1221120 | 4.0154 |
| 3.9363 | 0.03 | 1297440 | 4.0106 |
| 3.9281 | 1.03 | 1373760 | 4.0060 |
| 3.9175 | 0.03 | 1450080 | 4.0021 |
| 3.9093 | 1.03 | 1526400 | 3.9987 |
| 3.9051 | 0.03 | 1602720 | 3.9950 |
| 3.9007 | 1.03 | 1679040 | 3.9924 |
| 3.8976 | 0.03 | 1755360 | 3.9900 |
| 3.8936 | 1.03 | 1831680 | 3.9875 |
| 3.8884 | 0.03 | 1908000 | 3.9852 |
| 3.8794 | 1.03 | 1984320 | 3.9837 |
| 3.873 | 0.03 | 2060640 | 3.9819 |
| 3.868 | 1.03 | 2136960 | 3.9806 |
| 3.8618 | 0.03 | 2213280 | 3.9795 |
| 3.856 | 0.03 | 2289600 | 3.9782 |
| 3.852 | 1.03 | 2365920 | 3.9774 |
| 3.8583 | 0.03 | 2442240 | 3.9764 |
| 3.8458 | 1.03 | 2518560 | 3.9755 |
| 3.8454 | 0.03 | 2594880 | 3.9747 |
| 3.8409 | 1.03 | 2671200 | 3.9740 |
| 3.8376 | 0.03 | 2747520 | 3.9735 |
| 3.8394 | 1.03 | 2823840 | 3.9729 |
| 3.8386 | 0.03 | 2900160 | 3.9722 |
| 3.8412 | 1.03 | 2976480 | 3.9719 |
| 3.8417 | 0.02 | 3052726 | 3.9716 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CLMBR/npi-only-lstm-1
|
CLMBR
| 2024-01-23T18:03:28Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-18T15:12:48Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-only-lstm-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-only-lstm-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7741 | 0.03 | 76320 | 4.7420 |
| 4.4915 | 1.03 | 152640 | 4.4631 |
| 4.3483 | 0.03 | 228960 | 4.3303 |
| 4.2585 | 1.03 | 305280 | 4.2483 |
| 4.1993 | 0.03 | 381600 | 4.1924 |
| 4.1533 | 0.03 | 457920 | 4.1528 |
| 4.1166 | 1.03 | 534240 | 4.1225 |
| 4.0876 | 0.03 | 610560 | 4.0987 |
| 4.0603 | 1.03 | 686880 | 4.0803 |
| 4.0317 | 0.03 | 763200 | 4.0637 |
| 4.0102 | 1.03 | 839520 | 4.0512 |
| 3.9927 | 0.03 | 915840 | 4.0407 |
| 3.9778 | 1.03 | 992160 | 4.0326 |
| 3.9602 | 0.03 | 1068480 | 4.0247 |
| 3.9465 | 1.03 | 1144800 | 4.0175 |
| 3.9422 | 0.03 | 1221120 | 4.0125 |
| 3.9251 | 1.03 | 1297440 | 4.0065 |
| 3.9181 | 0.03 | 1373760 | 4.0027 |
| 3.9096 | 1.03 | 1450080 | 3.9988 |
| 3.901 | 0.03 | 1526400 | 3.9953 |
| 3.897 | 1.03 | 1602720 | 3.9925 |
| 3.8934 | 0.03 | 1679040 | 3.9896 |
| 3.8898 | 1.03 | 1755360 | 3.9873 |
| 3.8861 | 0.03 | 1831680 | 3.9854 |
| 3.8792 | 1.03 | 1908000 | 3.9838 |
| 3.8721 | 0.03 | 1984320 | 3.9818 |
| 3.8633 | 1.03 | 2060640 | 3.9798 |
| 3.8568 | 0.03 | 2136960 | 3.9786 |
| 3.8528 | 1.03 | 2213280 | 3.9773 |
| 3.8456 | 0.03 | 2289600 | 3.9759 |
| 3.8406 | 1.03 | 2365920 | 3.9750 |
| 3.845 | 0.03 | 2442240 | 3.9736 |
| 3.836 | 1.03 | 2518560 | 3.9728 |
| 3.8344 | 0.03 | 2594880 | 3.9720 |
| 3.8308 | 1.03 | 2671200 | 3.9715 |
| 3.8287 | 0.03 | 2747520 | 3.9709 |
| 3.8306 | 1.03 | 2823840 | 3.9704 |
| 3.83 | 0.03 | 2900160 | 3.9699 |
| 3.8331 | 1.03 | 2976480 | 3.9695 |
| 3.8329 | 0.02 | 3052726 | 3.9692 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lpepino/encodecmae-base
|
lpepino
| 2024-01-23T18:02:37Z | 0 | 1 | null |
[
"arxiv:2309.07391",
"license:mit",
"region:us"
] | null | 2023-09-08T19:38:46Z |
---
license: mit
---
# Model description
This is EnCodecMAE, an audio feature extractor pretrained with masked language modelling to predict discrete targets generated by EnCodec, a neural audio codec.
For more details about the architecture and pretraining procedure, read the [paper](https://arxiv.org/abs/2309.07391).
# Usage
### 1) Clone the [EnCodecMAE library](https://github.com/habla-liaa/encodecmae):
```
git clone https://github.com/habla-liaa/encodecmae.git
```
### 2) Install it:
```
cd encodecmae
pip install -e .
```
### 3) Extract embeddings in Python:
```python
from encodecmae import load_model

# Load the pretrained 'base' EnCodecMAE model on GPU 0.
model = load_model('base', device='cuda:0')

# Extract frame-level embeddings from an audio file.
features = model.extract_features_from_file('gsc/bed/00176480_nohash_0.wav')
```
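The returned `features` holds frame-level embeddings; a quick way to see the layout (which may vary with the encodecmae version) is to inspect the shape:

```python
# One embedding per EnCodec frame; check the exact (time, dim) layout.
print(features.shape)
```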
|
GioLee/GaeulIVE
|
GioLee
| 2024-01-23T18:02:11Z | 0 | 0 |
allennlp
|
[
"allennlp",
"summarization",
"ko",
"dataset:fka/awesome-chatgpt-prompts",
"license:apache-2.0",
"region:us"
] |
summarization
| 2024-01-23T17:53:57Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
language:
- ko
metrics:
- accuracy
library_name: allennlp
pipeline_tag: summarization
---
|
LoneStriker/Crunchy-onion-3.75bpw-h6-exl2
|
LoneStriker
| 2024-01-23T17:56:22Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lemonilia/LimaRP",
"dataset:grimulkan/theory-of-mind",
"dataset:Epiculous/Gnosis",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-23T17:46:51Z |
---
license: agpl-3.0
datasets:
- lemonilia/LimaRP
- grimulkan/theory-of-mind
- Epiculous/Gnosis
---
# Crunchy-onion
This model was created by training the Mixtral base model on LimaRP (ShareGPT format provided by SAO), theory of mind, and Gnosis (provided by jeiku).
The 4-bit QLoRA adapter was then merged into Mixtral Instruct, resulting in what you see here.
Works best with the ChatML instruct format.
|