| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
LoneStriker/Senku-70B-Full-3.5bpw-h6-exl2
|
LoneStriker
| 2024-02-07T14:43:53Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T14:27:46Z |
---
license: cc-by-2.0
---
Finetune of miqu-70b-sf, a dequant of miqudev's leak of Mistral-70B (allegedly an early Mistral Medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth.
EQ-Bench: 84.89
Will run more benches later.
|
xshini/HiguchiKaede
|
xshini
| 2024-02-07T14:42:29Z | 3 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-07T14:37:25Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: runwayml/stable-diffusion-v1-5
license: creativeml-openrail-m
---
https://civitai.com/models/18732/higuchi-kaede-nijisanji
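A minimal usage sketch with diffusers, assuming a recent diffusers release and the standard LoRA weight layout; the prompt is only illustrative, not the trained trigger words:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model declared in the metadata and attach this LoRA.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("xshini/HiguchiKaede")

# Illustrative placeholder prompt.
image = pipe("higuchi kaede, portrait", num_inference_steps=25).images[0]
image.save("higuchi_kaede.png")
```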
|
asorokoumov/ppo-LunarLander-v2
|
asorokoumov
| 2024-02-07T14:42:17Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T14:18:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.35 +/- 22.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub("asorokoumov/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Wajid333/a2c-PandaReachDense-v3
|
Wajid333
| 2024-02-07T14:36:07Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T14:31:51Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.20 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption; check the repository's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub("Wajid333/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
LoneStriker/Senku-70B-Full-2.65bpw-h6-exl2
|
LoneStriker
| 2024-02-07T14:27:45Z | 8 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T14:16:49Z |
---
license: cc-by-2.0
---
Finetune of miqu-70b-sf, a dequant of miqudev's leak of Mistral-70B (allegedly an early Mistral Medium). My diffs are available under CC-0; this is a merge with the leaked model, so you can use the other repository to save bandwidth.
EQ-Bench: 84.89
Will run more benches later.
|
kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_other_det_removal-1e-4
|
kanishka
| 2024-02-07T14:23:34Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual-babylm-only_other_det_removal",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T15:51:09Z |
---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual-babylm-only_other_det_removal
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-only_other_det_removal-1e-4
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual-babylm-only_other_det_removal
type: kanishka/counterfactual-babylm-only_other_det_removal
metrics:
- name: Accuracy
type: accuracy
value: 0.40654968657553286
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-only_other_det_removal-1e-4
This model was trained from scratch on the kanishka/counterfactual-babylm-only_other_det_removal dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4247
- Accuracy: 0.4065
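A minimal inference sketch with transformers (the prompt is only illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_other_det_removal-1e-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation; the prompt is an illustrative placeholder.
inputs = tokenizer("The child saw a", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```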
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 4.0532 | 1.0 | 18597 | 4.2579 | 0.3085 |
| 3.566 | 2.0 | 37194 | 3.7605 | 0.3620 |
| 3.3886 | 3.0 | 55791 | 3.5962 | 0.3806 |
| 3.2899 | 4.0 | 74388 | 3.5175 | 0.3894 |
| 3.2214 | 5.0 | 92985 | 3.4618 | 0.3939 |
| 3.1702 | 6.0 | 111582 | 3.4252 | 0.3979 |
| 3.1294 | 7.0 | 130179 | 3.4255 | 0.3995 |
| 3.0899 | 8.0 | 148776 | 3.4190 | 0.4010 |
| 3.0639 | 9.0 | 167373 | 3.4041 | 0.4027 |
| 3.0329 | 10.0 | 185970 | 3.4231 | 0.4029 |
| 3.0093 | 11.0 | 204567 | 3.4100 | 0.4045 |
| 2.9859 | 12.0 | 223164 | 3.4097 | 0.4049 |
| 2.9662 | 13.0 | 241761 | 3.4043 | 0.4053 |
| 2.9424 | 14.0 | 260358 | 3.4046 | 0.4057 |
| 2.928 | 15.0 | 278955 | 3.4079 | 0.4059 |
| 2.908 | 16.0 | 297552 | 3.4119 | 0.4061 |
| 2.8912 | 17.0 | 316149 | 3.4119 | 0.4062 |
| 2.8716 | 18.0 | 334746 | 3.4159 | 0.4064 |
| 2.8589 | 19.0 | 353343 | 3.4223 | 0.4065 |
| 2.8424 | 20.0 | 371940 | 3.4247 | 0.4065 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
HeydarS/flan-t5-base_peft_v23
|
HeydarS
| 2024-02-07T14:16:02Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-base",
"base_model:adapter:google/flan-t5-base",
"region:us"
] | null | 2024-02-07T14:16:00Z |
---
library_name: peft
base_model: google/flan-t5-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
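In the meantime, a minimal loading sketch, assuming this repository holds a PEFT adapter for google/flan-t5-base as declared in the metadata; the prompt is only illustrative:
```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base model and attach the adapter from this repository.
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "HeydarS/flan-t5-base_peft_v23")

# Illustrative placeholder prompt.
inputs = tokenizer("Answer the question: What is the capital of France?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```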
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
danaleee/CL_rank10_iter500_noval
|
danaleee
| 2024-02-07T14:15:24Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-07T13:36:38Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks teddybear
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - danaleee/CL_rank10_iter500_noval
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks teddybear using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
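A minimal inference sketch, assuming a recent diffusers release; the prompt follows the instance prompt above:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model and attach the DreamBooth LoRA weights from this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("danaleee/CL_rank10_iter500_noval")

image = pipe("a photo of sks teddybear", num_inference_steps=25).images[0]
image.save("sks_teddybear.png")
```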
|
LoneStriker/miquliz-120b-2.9bpw-h6-exl2
|
LoneStriker
| 2024-02-07T14:09:23Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"de",
"fr",
"es",
"it",
"base_model:152334H/miqu-1-70b-sf",
"base_model:merge:152334H/miqu-1-70b-sf",
"base_model:lizpreciatior/lzlv_70b_fp16_hf",
"base_model:merge:lizpreciatior/lzlv_70b_fp16_hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T22:49:50Z |
---
base_model:
- 152334H/miqu-1-70b-sf
- lizpreciatior/lzlv_70b_fp16_hf
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- mergekit
- merge
---
# miquliz-120b

- EXL2: [2.4bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.4bpw-h6-exl2) | [2.65bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.65bpw-h6-exl2) | 2.9bpw | [4.0bpw](https://huggingface.co/LoneStriker/miquliz-120b-4.0bpw-h6-exl2)
- GGUF: [IQ3_XXS](https://huggingface.co/wolfram/miquliz-120b-GGUF) | [Q4_K_S+Q4_K_M](https://huggingface.co/NanoByte/miquliz-120b-Q4-GGUF)
- HF: [wolfram/miquliz-120b](https://huggingface.co/wolfram/miquliz-120b)
This is a 120b frankenmerge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) using [mergekit](https://github.com/cg123/mergekit).
Inspired by [goliath-120b](https://huggingface.co/alpindale/goliath-120b).
Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit), the open-source platform for building in-app AI Copilots into any product, with any LLM. Check out their GitHub.
Thanks for the EXL2 and GGUF quants, [Lone Striker](https://huggingface.co/LoneStriker) and [NanoByte](https://huggingface.co/NanoByte)!
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]
```
See also: [🐺🐦⬛ LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with **17** different instruct templates : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)
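A small helper sketch for building prompts in this format (the example message is only illustrative):
```python
# Wrap a user message in the Mistral instruct format shown above.
def build_prompt(user_message: str) -> str:
    return f"<s>[INST] {user_message} [/INST]"

print(build_prompt("Write a short greeting in French."))
```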
## Model Details
- Max Context: 32768 tokens
- Layers: 137
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 16]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [8, 24]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [17, 32]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [25, 40]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [33, 48]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [41, 56]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [49, 64]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [57, 72]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [65, 80]
model: 152334H/miqu-1-70b-sf
```
## Credits & Special Thanks
- 1st model:
- original (unreleased) model: [mistralai (Mistral AI_)](https://huggingface.co/mistralai)
- leaked model: [miqudev/miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b)
- f16 model: [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- 2nd model: [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
- mergekit: [arcee-ai/mergekit: Tools for merging pretrained large language models.](https://github.com/arcee-ai/mergekit)
- mergekit_config.yml: [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
### Support
- [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
#### DISCLAIMER: THIS IS [BASED ON A LEAKED ASSET](https://huggingface.co/miqudev/miqu-1-70b/discussions/10) AND HAS NO LICENSE ASSOCIATED WITH IT. USE AT YOUR OWN RISK.
|
LoneStriker/miquliz-120b-4.0bpw-h6-exl2
|
LoneStriker
| 2024-02-07T14:09:20Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"de",
"fr",
"es",
"it",
"base_model:152334H/miqu-1-70b-sf",
"base_model:merge:152334H/miqu-1-70b-sf",
"base_model:lizpreciatior/lzlv_70b_fp16_hf",
"base_model:merge:lizpreciatior/lzlv_70b_fp16_hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T23:08:22Z |
---
base_model:
- 152334H/miqu-1-70b-sf
- lizpreciatior/lzlv_70b_fp16_hf
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- mergekit
- merge
---
# miquliz-120b

- EXL2: [2.4bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.4bpw-h6-exl2) | [2.65bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.65bpw-h6-exl2) | [2.9bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.9bpw-h6-exl2) | 4.0bpw
- GGUF: [IQ3_XXS](https://huggingface.co/wolfram/miquliz-120b-GGUF) | [Q4_K_S+Q4_K_M](https://huggingface.co/NanoByte/miquliz-120b-Q4-GGUF)
- HF: [wolfram/miquliz-120b](https://huggingface.co/wolfram/miquliz-120b)
This is a 120b frankenmerge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) using [mergekit](https://github.com/cg123/mergekit).
Inspired by [goliath-120b](https://huggingface.co/alpindale/goliath-120b).
Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit), the open-source platform for building in-app AI Copilots into any product, with any LLM. Check out their GitHub.
Thanks for the EXL2 and GGUF quants, [Lone Striker](https://huggingface.co/LoneStriker) and [NanoByte](https://huggingface.co/NanoByte)!
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]
```
See also: [🐺🐦⬛ LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with **17** different instruct templates : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)
## Model Details
- Max Context: 32768 tokens
- Layers: 137
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 16]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [8, 24]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [17, 32]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [25, 40]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [33, 48]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [41, 56]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [49, 64]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [57, 72]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [65, 80]
model: 152334H/miqu-1-70b-sf
```
## Credits & Special Thanks
- 1st model:
- original (unreleased) model: [mistralai (Mistral AI_)](https://huggingface.co/mistralai)
- leaked model: [miqudev/miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b)
- f16 model: [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- 2nd model: [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
- mergekit: [arcee-ai/mergekit: Tools for merging pretrained large language models.](https://github.com/arcee-ai/mergekit)
- mergekit_config.yml: [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
### Support
- [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
#### DISCLAIMER: THIS IS [BASED ON A LEAKED ASSET](https://huggingface.co/miqudev/miqu-1-70b/discussions/10) AND HAS NO LICENSE ASSOCIATED WITH IT. USE AT YOUR OWN RISK.
|
kanishka/smolm-autoreg-bpe-counterfactual-babylm-only_measure_nps_as_singular_removal-1e-4
|
kanishka
| 2024-02-07T14:06:52Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual-babylm-only_measure_nps_as_singular_removal",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T15:34:39Z |
---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual-babylm-only_measure_nps_as_singular_removal
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-only_measure_nps_as_singular_removal-1e-4
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual-babylm-only_measure_nps_as_singular_removal
type: kanishka/counterfactual-babylm-only_measure_nps_as_singular_removal
metrics:
- name: Accuracy
type: accuracy
value: 0.4057273905279679
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-only_measure_nps_as_singular_removal-1e-4
This model was trained from scratch on the kanishka/counterfactual-babylm-only_measure_nps_as_singular_removal dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4267
- Accuracy: 0.4057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 4.0456 | 1.0 | 18600 | 4.2695 | 0.3100 |
| 3.5586 | 2.0 | 37200 | 3.7569 | 0.3640 |
| 3.3865 | 3.0 | 55800 | 3.5821 | 0.3801 |
| 3.2864 | 4.0 | 74400 | 3.5184 | 0.3877 |
| 3.2138 | 5.0 | 93000 | 3.4647 | 0.3930 |
| 3.1634 | 6.0 | 111600 | 3.4300 | 0.3973 |
| 3.1242 | 7.0 | 130200 | 3.4365 | 0.3982 |
| 3.0882 | 8.0 | 148800 | 3.4228 | 0.4004 |
| 3.0589 | 9.0 | 167400 | 3.4148 | 0.4012 |
| 3.0298 | 10.0 | 186000 | 3.4086 | 0.4025 |
| 3.0091 | 11.0 | 204600 | 3.4138 | 0.4031 |
| 2.982 | 12.0 | 223200 | 3.4183 | 0.4033 |
| 2.9628 | 13.0 | 241800 | 3.4182 | 0.4037 |
| 2.9451 | 14.0 | 260400 | 3.4063 | 0.4046 |
| 2.9249 | 15.0 | 279000 | 3.4066 | 0.4051 |
| 2.9046 | 16.0 | 297600 | 3.4134 | 0.4057 |
| 2.8879 | 17.0 | 316200 | 3.4187 | 0.4053 |
| 2.8659 | 18.0 | 334800 | 3.4161 | 0.4058 |
| 2.8577 | 19.0 | 353400 | 3.4254 | 0.4057 |
| 2.8337 | 20.0 | 372000 | 3.4267 | 0.4057 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Americo/phi-2-finetuned-farmatodo
|
Americo
| 2024-02-07T14:01:39Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T13:52:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
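In the meantime, a minimal inference sketch with transformers; trust_remote_code reflects the custom_code tag, and the prompt is only illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Americo/phi-2-finetuned-farmatodo"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

# Illustrative placeholder prompt.
inputs = tokenizer("Hola, ¿qué puedes hacer?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```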
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DoctorKrazy/sbaitso
|
DoctorKrazy
| 2024-02-07T14:00:55Z | 0 | 0 | null |
[
"en",
"region:us"
] | null | 2024-02-07T13:56:12Z |
---
language:
- en
---
# Sbaitso AI Voice model for RVC
This is a voice model trained on Sbaitso, most famously known as the voice of SCP-079 in the SCP: Containment Breach video game.
If you use this AI voice model, please credit me by linking this page in the description.
|
OptimusAz/Comic
|
OptimusAz
| 2024-02-07T13:57:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-07T13:57:03Z |
Title: The Snake and the Traitor
Panel 1:
(Wide shot. A dark forest with dense trees and a narrow path. Sunlight shines through the treetops. In the foreground we see a snake gliding elegantly along the path.)
Narrator: In a mysterious forest, far from any civilization, lived a clever snake named Seraphina.
Panel 2:
(Close-up of Seraphina. She has gleaming scales and glowing eyes. She looks wary.)
Seraphina: This forest holds many secrets. I must be careful and watch whom I trust.
Panel 3:
(Seraphina approaches another animal lying half in shadow. It is a fox-like creature with a mischievous expression.)
Seraphina: Good day, stranger. I am Seraphina. What brings you to this forest?
Panel 4:
(The fox-like creature smiles, baring its sharp teeth. It looks menacing.)
Fox-like creature: I am Vex, and I roam this forest in search of adventure. Perhaps we could go exploring together?
Panel 5:
(Seraphina eyes Vex skeptically. Her eyes glimmer with suspicion.)
Seraphina: I am wary of strangers, Vex. Why should I trust you?
Panel 6:
(Vex places a paw on his heart and looks at Seraphina with an innocent gaze.)
Vex: My heart is pure, Seraphina. I swear I will do you no harm. I am only looking for a friend to share these adventures with.
Panel 7:
(Seraphina thinks for a moment, then nods slowly.)
Seraphina: Very well, Vex. We can travel together, but be warned: if you betray me, there will be consequences.
Panel 8:
(The two continue their journey through the forest as the sun slowly sets. Seraphina remains watchful while Vex chatters cheerfully.)
Narrator: And so began the unusual friendship between Seraphina and Vex. But in the shadows lurked a dark secret that would soon come to light.
|
naviam/my-pet-dog
|
naviam
| 2024-02-07T13:53:00Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-07T13:48:56Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by naviam following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
|
bartowski/Kunocchini-7b-128k-test-exl2
|
bartowski
| 2024-02-07T13:51:43Z | 5 | 4 |
transformers
|
[
"transformers",
"mergekit",
"merge",
"alpaca",
"mistral",
"text-generation",
"base_model:Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context",
"base_model:merge:Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T13:35:21Z |
---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Kunocchini-7b-128k-test
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/Test157t/Kunocchini-7b-128k-test
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/Bartowski/Kunocchini-7b-128k-test-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/Kunocchini-7b-128k-test-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/Kunocchini-7b-128k-test-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/Bartowski/Kunocchini-7b-128k-test-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/Bartowski/Kunocchini-7b-128k-test-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Kunocchini-7b-128k-test-exl2 Kunocchini-7b-128k-test-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Kunocchini-7b-128k-test-exl2`:
```shell
mkdir Kunocchini-7b-128k-test-exl2
huggingface-cli download bartowski/Kunocchini-7b-128k-test-exl2 --local-dir Kunocchini-7b-128k-test-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Kunocchini-7b-128k-test-exl2-6_5
huggingface-cli download bartowski/Kunocchini-7b-128k-test-exl2 --revision 6_5 --local-dir Kunocchini-7b-128k-test-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir Kunocchini-7b-128k-test-exl2-6.5
huggingface-cli download bartowski/Kunocchini-7b-128k-test-exl2 --revision 6_5 --local-dir Kunocchini-7b-128k-test-exl2-6.5 --local-dir-use-symlinks False
```
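An equivalent sketch with the huggingface_hub Python API (branch name taken from the table above):
```python
from huggingface_hub import snapshot_download

# Download the 6.5 bpw branch into a local folder.
snapshot_download(
    repo_id="bartowski/Kunocchini-7b-128k-test-exl2",
    revision="6_5",
    local_dir="Kunocchini-7b-128k-test-exl2-6_5",
)
```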
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
spsither/wav2vec2_run9.18
|
spsither
| 2024-02-07T13:45:10Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-07T13:44:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
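In the meantime, a minimal inference sketch with the transformers pipeline; the audio path is a placeholder:
```python
from transformers import pipeline

# "sample.wav" is a placeholder path to a local audio file.
asr = pipeline("automatic-speech-recognition", model="spsither/wav2vec2_run9.18")
print(asr("sample.wav")["text"])
```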
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
iamhack/wav2vec2-base-finetuned-ks-open-close
|
iamhack
| 2024-02-07T13:36:02Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-02-07T11:52:25Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks-open-close
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.998286586955712
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks-open-close
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0100
- Accuracy: 0.9983
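A minimal inference sketch with the transformers pipeline; the audio path is a placeholder:
```python
from transformers import pipeline

# "sample.wav" is a placeholder path to a local audio file.
classifier = pipeline("audio-classification", model="iamhack/wav2vec2-base-finetuned-ks-open-close")
print(classifier("sample.wav"))
```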
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0866 | 1.0 | 209 | 0.0388 | 0.9956 |
| 0.021 | 2.0 | 419 | 0.0162 | 0.9978 |
| 0.0172 | 3.0 | 629 | 0.0102 | 0.9985 |
| 0.0195 | 4.0 | 839 | 0.0083 | 0.9991 |
| 0.0188 | 4.98 | 1045 | 0.0100 | 0.9983 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ExAi/Claire-Mistral-7B-v0.1.3-exl2-4.0
|
ExAi
| 2024-02-07T13:34:55Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"pretrained",
"conversational",
"fr",
"arxiv:2311.16840",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T13:20:35Z |
---
language:
- fr
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
base_model: mistralai/Mistral-7B-v0.1
tags:
- pretrained
- conversational
widget:
- text: |-
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,
example_title: Request for a recipe
group: Dash
- text: |-
[Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Intervenant 2:] Bonjour Camille,
example_title: Request for a recipe
group: Intervenant
- text: |-
[Camille:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Dominique:] Bonjour Camille,
example_title: Request for a recipe
group: FirstName
- text: |-
[Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Dominique Petit:] Bonjour Camille,
example_title: Request for a recipe
group: Named
inference:
parameters:
temperature: 1.0
max_new_tokens: 200
top_k: 10
---
# Claire-Mistral-7B-0.1
**Claire-Mistral-7B-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) and [OpenLLM-France](https://github.com/OpenLLM-France)**
**adapted from [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) on French conversational data.**
Claire-Mistral-7B-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that due to its training, the model is prone to generate dialogues with disfluencies and other constructions common to spoken language.
A qualitatively better variant of this model is available under [Claire-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-0.1).
* [Typical usage](#typical-usage)
* [Typical prompts](#typical-prompts)
* [Training Details](#training-details)
* [Training Data](#training-data)
* [Training Procedure](#training-procedure)
* [Evaluation](#evaluation)
* [License](#license)
* [Acknowledgements](#acknowledgements)
* [Contact](#contact)
## Typical usage
```python
import transformers
import torch
model_name = "OpenLLM-France/Claire-Mistral-7B-0.1"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
load_in_4bit=True # For efficient inference, if supported by the GPU card
)
pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
generation_kwargs = dict(
num_return_sequences=1, # Number of variants to generate.
return_full_text= False, # Do not include the prompt in the generated text.
max_new_tokens=200, # Maximum length for the output text.
do_sample=True, top_k=10, temperature=1.0, # Sampling parameters.
pad_token_id=tokenizer.eos_token_id, # Just to avoid a harmless warning.
)
prompt = """\
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,\
"""
completions = pipeline(prompt, **generation_kwargs)
for completion in completions:
print(prompt + " […]" + completion['generated_text'])
```
This will print something like:
```
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille, […] je vous prépare un plat de saison, une daube provençale.
- Ah je ne connais pas cette recette.
- C'est très facile à préparer, vous n'avez qu'à mettre de l'eau dans une marmite, y mettre de l'oignon émincé, des carottes coupées en petits morceaux, et vous allez mettre votre viande de bœuf coupé en petits morceaux également.
- Je n'ai jamais cuisiné de viande de bœuf, mais c'est vrai que ça a l'air bien facile.
- Vous n'avez plus qu'à laisser mijoter, et ensuite il sera temps de servir les clients.
- Très bien.
```
You will need at least 6GB of VRAM to run inference using 4bit quantization (16GB of VRAM without 4bit quantization).
If you have trouble running this code, make sure you have recent versions of `torch`, `transformers` and `accelerate` (see [requirements.txt](requirements.txt)).
### Typical prompts
Claire-Mistral-7B-0.1 was trained on diarized French conversations. During training, the dialogues were normalized in several formats. The possible formats for expected prompts are as follows:
A monologue can be specified as a single line prompt (though keep in mind that the model might still return a dialogue because of its training):
```python
prompt = "Mesdames et messieurs les députés, chers collègues, bonsoir. Vous l'aurez peut-être remarqué, je cite rarement"
```
A dialogue between two speakers can be specified with one line per speech turn starting with a dash:
```python
prompt = """\
- Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- Bonjour Camille,\
"""
```
A dialogue or multilogue (with two or more speakers) can be specified with lines that start with `[Intervenant X:]` where `X` is a number:
```python
prompt = """\
[Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Intervenant 2:] Bonjour Camille,\
"""
```
A dialogue or multilogue with named speakers can be specified with lines that start with `[SpeakerName:]`
where `SpeakerName` can be a first name, a first and a last name, a nickname, a title…
```python
prompt = """\
[Mme Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
[Mr. Dominique Petit:] Bonjour Camille,\
"""
```
## Training Details
### Training Data
The training dataset is available at [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1)
and described in ["The Claire French Dialogue Dataset" (2023)](https://arxiv.org/abs/2311.16840).
Claire-Mistral-7B-0.1 was tuned from Mistral-7B-v0.1 on the following data distribution:
| **Data type** | **Words** | **Training Sampling Weight** | **Sources** |
|-------------------------------|------------|------------------------------|-----------------------------------------------------|
| Parliamentary Proceedings | 135M | 35% | Assemblée Nationale |
| Theatre | 16M | 18% | Théâtre Classique, Théâtre Gratuit |
| Interviews | 6.4M | 29% | TCOF, CFPP, CFPB (ORFEO), ACSYNT, PFC, Valibel (ORFEO), ESLO|
| Free Conversations | 2.2M | 10% | CRFP (ORFEO), OFROM (ORFEO), CID, Rhapsodie, ParisStories, PFC, CLAPI, C-ORAL-ROM (ORFEO), LinTO, ESLO |
| Meetings | 1.2M | 5% | SUMM-RE, LinTO, Réunions de travail (ORFEO) |
| Debates | 402k | <2% | FREDSum, ESLO |
| Assistance | 159k | <1% | Fleuron (ORFEO), Accueil UBS, OTG, ESLO |
| Presentation, Formal Address | 86k | <0.5% | Valibel (ORFEO), LinTO, ESLO |
Training data was augmented with the following techniques:
* varying the format used to indicate speech turns (dashes or [XXX:])
* substituting [Intervenant X:] for [SpeakerName:] or vice versa, where [SpeakerName:] might be a real name or a randomly generated name
* removing punctuation marks and/or casing (to prepare the model for transcripts produced by some Automatic Speech Recognition systems)
Long conversations were truncated at a maximum of 4096 tokens. Where possible, they were split between speaker turns.
While the model has been trained and evaluated only on French dialogues, it may be able to generate conversations in other languages from the original Mistral-7B-v0.1 training data.
### Training Procedure
The training code is available at [https://github.com/OpenLLM-France/Lit-Claire](https://github.com/OpenLLM-France/Lit-Claire).
Claire-Mistral-7B-0.1 is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
See [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details.
Claire-Mistral-7B-0.1 was trained on 8 A100 80GB GPUs for about 50 GPU hours.
Hyperparameters were the following:
| **Hyperparameter** | **Value** |
|--------------------|------------|
| Precision | `bfloat16` |
| Optimizer | AdamW |
| Learning rate | 1e-4 |
| Weight decay | 1e-2 |
| Batch size | 128 |
| LoRA rank | 16 |
| LoRA alpha | 32 |
| Dropout | 0.05 |
| gradient clipping | 1 |
## Evaluation
See the [Evaluation section of Claire-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-0.1#evaluation).
## License
Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses,
Claire-Mistral-7B-0.1 is made available under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).
## Acknowledgements
This work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011014561).
Claire-Mistral-7B-0.1 was created by members of [LINAGORA](https://labs.linagora.com/) (in alphabetical order): Ismaïl Harrando, Julie Hunter, Jean-Pierre Lorré, Jérôme Louradour, Michel-Marie Maudet, Virgile Rennard, Guokan Shang.
Special thanks to partners from the OpenLLM-France community, especially Christophe Cerisara (LORIA), Pierre-Carl Langlais and Anastasia Stasenko (OpSci), and Pierre Colombo, for valuable advice.
## Contact
[email protected]
|
Kooten/Kunocchini-7b-128k-test-8bpw-exl2
|
Kooten
| 2024-02-07T13:32:02Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"alpaca",
"conversational",
"base_model:Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context",
"base_model:merge:Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T12:30:35Z |
---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
---
# Kunocchini-7b-128k-test
Exl2 quant of [Test157t/Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test)
## Contact
Kooten on discord
[ko-fi.com/kooten](https://ko-fi.com/kooten)
|
omarfarooq908/llama2-qlora-finetunined-french
|
omarfarooq908
| 2024-02-07T13:26:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T13:25:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
joseTfm/tfm_qa_torch_spanish
|
joseTfm
| 2024-02-07T13:23:14Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:dccuchile/distilbert-base-spanish-uncased",
"base_model:finetune:dccuchile/distilbert-base-spanish-uncased",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-06T22:14:35Z |
---
base_model: dccuchile/distilbert-base-spanish-uncased
tags:
- generated_from_trainer
model-index:
- name: tfm_qa_torch_spanish
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tfm_qa_torch_spanish
This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 2.8229 |
| No log | 2.0 | 6 | 2.6078 |
| No log | 3.0 | 9 | 2.5237 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
arpanl/Fine-Tuned_Model
|
arpanl
| 2024-02-07T13:14:26Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-07T12:03:42Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: Fine-Tuned_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine-Tuned_Model
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
CLMBR/existential-there-quantifier-lstm-0
|
CLMBR
| 2024-02-07T13:12:06Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T10:12:23Z |
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-lstm-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-lstm-0
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7923 | 0.03 | 76320 | 4.7567 |
| 4.5062 | 1.03 | 152640 | 4.4784 |
| 4.3611 | 0.03 | 228960 | 4.3434 |
| 4.2754 | 1.03 | 305280 | 4.2611 |
| 4.2127 | 0.03 | 381600 | 4.2044 |
| 4.1658 | 1.03 | 457920 | 4.1634 |
| 4.1265 | 0.03 | 534240 | 4.1321 |
| 4.093 | 1.03 | 610560 | 4.1083 |
| 4.0641 | 0.03 | 686880 | 4.0882 |
| 4.0398 | 1.03 | 763200 | 4.0726 |
| 4.0182 | 0.03 | 839520 | 4.0593 |
| 4.0039 | 1.03 | 915840 | 4.0482 |
| 3.9882 | 0.03 | 992160 | 4.0383 |
| 3.9712 | 1.03 | 1068480 | 4.0307 |
| 3.9598 | 0.03 | 1144800 | 4.0232 |
| 3.9485 | 1.03 | 1221120 | 4.0177 |
| 3.9388 | 0.03 | 1297440 | 4.0131 |
| 3.9269 | 0.03 | 1373760 | 4.0087 |
| 3.9167 | 1.03 | 1450080 | 4.0042 |
| 3.9134 | 0.03 | 1526400 | 4.0006 |
| 3.9061 | 0.03 | 1602720 | 3.9978 |
| 3.902 | 1.03 | 1679040 | 3.9954 |
| 3.8986 | 0.03 | 1755360 | 3.9927 |
| 3.8901 | 1.03 | 1831680 | 3.9912 |
| 3.8831 | 0.03 | 1908000 | 3.9885 |
| 3.8764 | 0.03 | 1984320 | 3.9866 |
| 3.87 | 0.03 | 2060640 | 3.9843 |
| 3.8692 | 1.03 | 2136960 | 3.9829 |
| 3.8652 | 0.03 | 2213280 | 3.9817 |
| 3.856 | 1.03 | 2289600 | 3.9807 |
| 3.8549 | 0.03 | 2365920 | 3.9794 |
| 3.8515 | 1.03 | 2442240 | 3.9785 |
| 3.8472 | 0.03 | 2518560 | 3.9777 |
| 3.8438 | 0.03 | 2594880 | 3.9771 |
| 3.8379 | 1.03 | 2671200 | 3.9760 |
| 3.841 | 0.03 | 2747520 | 3.9755 |
| 3.8389 | 0.03 | 2823840 | 3.9748 |
| 3.8408 | 1.03 | 2900160 | 3.9742 |
| 3.8396 | 0.03 | 2976480 | 3.9736 |
| 3.8366 | 1.02 | 3052726 | 3.9732 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Paquique/Taxi-v3
|
Paquique
| 2024-02-07T13:05:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T12:35:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # or: import gymnasium as gym, depending on your setup

# load_from_hub is the helper defined in the accompanying Deep RL Course notebook
model = load_from_hub(repo_id="Paquique/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
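As a short, hedged follow-up: assuming the pickled dictionary exposes the learned Q-table under a `qtable` key (the convention used in the course notebooks) and a Gymnasium-style step API, a greedy rollout could look like this.
```python
# Hedged sketch: greedy rollout with the downloaded Q-table.
# Assumes model["qtable"] holds the Q-table and env uses the 5-tuple (Gymnasium-style) step API.
import numpy as np

qtable = model["qtable"]

state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
    total_reward += reward
print("episode reward:", total_reward)
```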
|
CLMBR/existential-there-quantifier-lstm-4
|
CLMBR
| 2024-02-07T13:03:37Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T10:12:59Z |
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-lstm-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-lstm-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7774 | 0.03 | 76320 | 4.7429 |
| 4.4932 | 1.03 | 152640 | 4.4638 |
| 4.3479 | 0.03 | 228960 | 4.3308 |
| 4.2625 | 1.03 | 305280 | 4.2490 |
| 4.1981 | 0.03 | 381600 | 4.1928 |
| 4.1521 | 1.03 | 457920 | 4.1530 |
| 4.1154 | 0.03 | 534240 | 4.1230 |
| 4.0805 | 1.03 | 610560 | 4.0988 |
| 4.0514 | 0.03 | 686880 | 4.0793 |
| 4.0259 | 1.03 | 763200 | 4.0640 |
| 4.0056 | 0.03 | 839520 | 4.0506 |
| 3.9903 | 1.03 | 915840 | 4.0404 |
| 3.9761 | 0.03 | 992160 | 4.0308 |
| 3.9565 | 1.03 | 1068480 | 4.0234 |
| 3.9472 | 0.03 | 1144800 | 4.0168 |
| 3.9352 | 1.03 | 1221120 | 4.0114 |
| 3.9245 | 0.03 | 1297440 | 4.0061 |
| 3.9137 | 1.03 | 1373760 | 4.0013 |
| 3.9036 | 0.03 | 1450080 | 3.9982 |
| 3.8998 | 0.03 | 1526400 | 3.9949 |
| 3.8957 | 1.03 | 1602720 | 3.9922 |
| 3.891 | 0.03 | 1679040 | 3.9897 |
| 3.8872 | 1.03 | 1755360 | 3.9876 |
| 3.8784 | 0.03 | 1831680 | 3.9853 |
| 3.8704 | 1.03 | 1908000 | 3.9831 |
| 3.8615 | 0.03 | 1984320 | 3.9815 |
| 3.8584 | 0.03 | 2060640 | 3.9799 |
| 3.8554 | 1.03 | 2136960 | 3.9784 |
| 3.8507 | 0.03 | 2213280 | 3.9773 |
| 3.8436 | 0.03 | 2289600 | 3.9763 |
| 3.8417 | 1.03 | 2365920 | 3.9754 |
| 3.8366 | 0.03 | 2442240 | 3.9742 |
| 3.8328 | 1.03 | 2518560 | 3.9736 |
| 3.8293 | 0.03 | 2594880 | 3.9726 |
| 3.8258 | 0.03 | 2671200 | 3.9719 |
| 3.8263 | 0.03 | 2747520 | 3.9714 |
| 3.8265 | 1.03 | 2823840 | 3.9709 |
| 3.8291 | 0.03 | 2900160 | 3.9704 |
| 3.8271 | 1.03 | 2976480 | 3.9701 |
| 3.8234 | 0.02 | 3052726 | 3.9698 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
BhabhaAI/Mistral-translation-classify
|
BhabhaAI
| 2024-02-07T12:56:15Z | 4 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:BhabhaAI/translation-classify",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T05:50:03Z |
---
library_name: transformers
license: apache-2.0
datasets:
- BhabhaAI/translation-classify
language:
- en
---
# Mistral Translation Classify
This is a model fine-tuned on the [translation-classify dataset](https://huggingface.co/datasets/BhabhaAI/translation-classify) to classify whether an example should be translated.
It achieves 94% accuracy on the validation dataset.
## Examples
Some questions do not remain meaningful/correct when translated. The goal is to avoid such examples.
This includes coding, word-count and spelling-error-detection questions, among others. Take a look at the [dataset](https://huggingface.co/datasets/BhabhaAI/translation-classify) for examples.
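The card does not specify the exact prompt template, so the snippet below is only a hedged sketch: it loads the checkpoint with standard `transformers` calls, but the instruction format shown is an assumption and should be checked against the dataset.
```python
# Hedged sketch: loading the classifier with standard transformers calls.
# The prompt format below is an assumption -- check the translation-classify dataset for the real template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BhabhaAI/Mistral-translation-classify"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

question = "Write a Python function that reverses a string."
prompt = f"Should the following example be translated?\n\n{question}\n\nAnswer:"  # hypothetical template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```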
|
yaneq/jan_azS4_SDXL_LoRA_500_9d94_
|
yaneq
| 2024-02-07T12:54:31Z | 4 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-07T12:44:32Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MDDL man
license: openrail++
---
# SDXL LoRA DreamBooth - yaneq/jan_azS4_SDXL_LoRA_500_9d94_
<Gallery />
## Model description
These are yaneq/jan_azS4_SDXL_LoRA_500_9d94_ LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of MDDL man to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/yaneq/jan_azS4_SDXL_LoRA_500_9d94_/tree/main) them in the Files & versions tab.
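The card does not ship usage code, so here is a minimal, hedged `diffusers` sketch for loading these LoRA weights on top of the SDXL base model; the fp16-fix VAE mirrors the training setup described above, and the generation settings are illustrative.
```python
# Hedged sketch: SDXL base + these DreamBooth LoRA weights via diffusers.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adapter from this repository
pipe.load_lora_weights("yaneq/jan_azS4_SDXL_LoRA_500_9d94_")

# Use the trigger phrase from the card
image = pipe("a photo of MDDL man, studio portrait, soft light").images[0]
image.save("mddl_man.png")
```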
## Training properties
- max_train_steps: 500
- learning_rate: 1e-05
- base_model_name: stabilityai/stable-diffusion-xl-base-1.0
- class_name: man
- training_images_urls: - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FazS4WZxJtGAuVzhZxuys%2FazS4WZxJtGAuVzhZxuys%2F69o1vZPLc7GJXGlpAMMH.jpg?alt=media&token=b01bdfc5-1645-49b4-ac96-726ab2a3fbc3
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FazS4WZxJtGAuVzhZxuys%2FazS4WZxJtGAuVzhZxuys%2F8WWFXPruZHIDj9gfH3jx.jpg?alt=media&token=6c57b1ea-49fa-4321-83de-d59641f24aea
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FazS4WZxJtGAuVzhZxuys%2FazS4WZxJtGAuVzhZxuys%2FK6UvnghSTdYvrdPpLYoq.jpg?alt=media&token=4eeafb6d-ce6f-417a-b6d8-e50c25ca4368
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FazS4WZxJtGAuVzhZxuys%2FazS4WZxJtGAuVzhZxuys%2FVKPcuAllJieRnqxM6yfg.jpg?alt=media&token=fdbb8903-fad7-472c-a394-061a5dcef8aa
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FazS4WZxJtGAuVzhZxuys%2FazS4WZxJtGAuVzhZxuys%2FcKSlO7eCieu2lR7aFa7u.jpg?alt=media&token=b4ffea94-ca2a-4cf2-bcf1-c49e764fe707
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FazS4WZxJtGAuVzhZxuys%2FazS4WZxJtGAuVzhZxuys%2Fts5tpMOSpccBu5qqsTom.jpg?alt=media&token=b8b980ec-daff-46d6-b69b-9d697be73021
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FazS4WZxJtGAuVzhZxuys%2FazS4WZxJtGAuVzhZxuys%2FxXz3PRjpU8X7Ws9DHyxk.jpg?alt=media&token=95cd6951-3f17-4c7f-9749-4f8fd8e500c6
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FazS4WZxJtGAuVzhZxuys%2FazS4WZxJtGAuVzhZxuys%2FyyyIJgCkhtFaDGgHWrTO.jpg?alt=media&token=5ae68c94-86d8-483c-9cc5-9a00f124662e
- gradient_accumulation_steps: 3
- GPU: T4
- duration: 3796.2920064926147
|
arnabmukherjee/ppo-LunarLander-v2
|
arnabmukherjee
| 2024-02-07T12:52:52Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T12:52:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 267.74 +/- 21.17
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
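The template above leaves the usage code as a TODO. As a hedged sketch (the checkpoint filename is an assumption; check the Files & versions tab), loading and evaluating the agent could look like this:
```python
# Hedged sketch: load the PPO checkpoint from the Hub and evaluate it.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="arnabmukherjee/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumption -- verify the actual filename in the repo
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```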
|
chethanuk/classify_food_items
|
chethanuk
| 2024-02-07T12:48:37Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-07T09:12:27Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: classify_food_items
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classify_food_items
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5776
- Accuracy: 0.84
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.5846 | 0.99 | 62 | 2.5776 | 0.84 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
mertllc/mms-tts-tur-twenties-male
|
mertllc
| 2024-02-07T12:44:58Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T12:05:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
UnaiGurbindo/speecht5_finetuned_voxpopuli_lt
|
UnaiGurbindo
| 2024-02-07T12:37:47Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2024-02-07T09:25:12Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- facebook/voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_lt_gg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_lt_gg
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4952
## Model description
More information needed
## Intended uses & limitations
More information needed
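Pending more details from the authors, here is a hedged usage sketch: standard SpeechT5 inference with `transformers`, assuming the repository ships the processor files and borrowing a generic x-vector speaker embedding (any 512-dimensional x-vector should work).
```python
# Hedged sketch (not from the authors): SpeechT5 TTS with a generic speaker embedding.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "UnaiGurbindo/speecht5_finetuned_voxpopuli_lt"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Speaker embedding borrowed from the CMU Arctic x-vectors set (an assumption, not the training speakers)
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Labas, kaip sekasi?", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```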
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5286 | 51.95 | 500 | 0.5118 |
| 0.4869 | 103.9 | 1000 | 0.4986 |
| 0.481 | 155.84 | 1500 | 0.4952 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
llmware/slim-topics-tool
|
llmware
| 2024-02-07T12:37:22Z | 101 | 6 |
transformers
|
[
"transformers",
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T17:33:56Z |
---
license: apache-2.0
---
# SLIM-TOPICS-TOOL
<!-- Provide a quick summary of what the model is/does. -->
**slim-topics-tool** is a 4_K_M quantized GGUF version of slim-topics, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.
[**slim-topics**](https://huggingface.co/llmware/slim-topics) is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
To pull the model via API:

    from huggingface_hub import snapshot_download
    snapshot_download("llmware/slim-topics-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
Load in your favorite GGUF inference engine, or try with llmware as follows:

    from llmware.models import ModelCatalog

    # to load the model and make a basic inference
    model = ModelCatalog().load_model("slim-topics-tool")
    response = model.function_call(text_sample)

    # this one line will download the model and run a series of tests
    ModelCatalog().tool_test_run("slim-topics-tool", verbose=True)
Slim models can also be loaded even more simply as part of multi-model, multi-step LLMfx calls:

    from llmware.agents import LLMfx

    llm_fx = LLMfx()
    llm_fx.load_tool("topics")
    response = llm_fx.topics(text)
Note: please review [**config.json**](https://huggingface.co/llmware/slim-topics-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
## Model Card Contact
Darren Oberst & llmware team
[Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)
|
Ketengan-Diffusion/AnySomniumXL-v3.5
|
Ketengan-Diffusion
| 2024-02-07T12:37:14Z | 11 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"SDXL",
"art",
"stable-diffusion-XL",
"fantasy",
"anime",
"aiart",
"ketengan",
"AnySomniumXL",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-02-04T07:27:43Z |
---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- SDXL
- art
- stable-diffusion-XL
- fantasy
- anime
- aiart
- ketengan
- AnySomniumXL
pipeline_tag: text-to-image
library_name: diffusers
---
# AnySomniumXL v3.5 Model Showcase
<p align="center">
<img src="01.png" width=70% height=70%>
</p>
`Ketengan-Diffusion/AnySomniumXL v3.5` is an SDXL model that has been fine-tuned on [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).
This is an enhanced version of AnySomniumXL v3.
# Changelog over AnySomniumXL v3
* Better captioning process
* Better model generalization
* Increased concept and character accuracy
* Better stylization of untrained tokens
# Our Dataset Process Curation
<p align="center">
<img src="Curation.png" width=70% height=70%>
</p>
Image source: [Source1](https://danbooru.donmai.us/posts/3143351) [Source2](https://danbooru.donmai.us/posts/3272710) [Source3](https://danbooru.donmai.us/posts/3320417)
Our dataset is scored using the pretrained CLIP+MLP aesthetic scoring model from https://github.com/christophschuhmann/improved-aesthetic-predictor, and we adjusted our script to detect any text or watermark by using OCR via pytesseract.
This scoring method has a scale from -1 to 100. We take a score threshold of around 17 or 20 as the minimum and 65-75 as the maximum to preserve the 2D style of the dataset; any image containing text returns a score of -1, so any image scoring below 17 or above 65 is deleted.
The dataset curation process runs on an Nvidia T4 16GB machine and takes about 7 days to curate 1,000,000 images (about 2 days for 300,000 images).
# Captioning process
We use a combination of a proprietary multimodal LLM and open-source multimodal LLMs such as LLaVA 1.5 for the captioning process, which produces more detailed results than plain BLIP2. Details such as clothes, atmosphere, situation, scene, place, gender, skin and so on are generated by the LLM.
Captioning 133k images takes about 6 days with an NVIDIA Tesla A100 80GB PCIe. We are still improving our script to generate captions faster. The minimum VRAM required for this captioning process is 24GB, which the NVIDIA Tesla T4 16GB does not provide.
# Tagging Process
We simply use booru tags retrieved from booru boards, so the tags can be assigned manually by humans, which makes them more accurate.
# Official Demo
You can try our AnySomniumXL v3 for free on demo.ketengan.com
# Training Process
AnySomniumXL v3.5 Technical Specifications:
* Batch size: 25
* Learning rate: 2e-6
* Bucket size: 1280x1280
* Shuffle caption: Yes
* Clip skip: 2
* Hardware: 2x NVIDIA A100 80GB
# Recommended Resolution
Because the model was trained with a 1280x1280 bucket size, these are the best resolutions for getting the full power of AnySomniumXL v3.5 (a minimal loading sketch follows the list):
* 1280x1280
* 1472x1088
* 1152x1408
* 1536x1024
* 1856x832
* 1024x1600
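A minimal, hedged `diffusers` sketch (not official usage code from the authors): the repository is tagged for `StableDiffusionXLPipeline`, so loading it directly from the Hub should look roughly like this, using one of the recommended resolutions.
```python
# Hedged sketch: loading AnySomniumXL v3.5 with diffusers; prompt and settings are illustrative.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Ketengan-Diffusion/AnySomniumXL-v3.5", torch_dtype=torch.float16
).to("cuda")

prompt = "1girl, fantasy castle garden, detailed background, soft lighting"
image = pipe(prompt, width=1280, height=1280, num_inference_steps=28).images[0]
image.save("anysomnium_sample.png")
```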
You can support me:
- on [Ko-FI](https://ko-fi.com/ncaix)
|
tensorops/whisper-small-th-cmv13-vanilla
|
tensorops
| 2024-02-07T12:31:35Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-07T12:30:13Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: whisper-small-th-cmv13-vanilla
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-th-cmv13-vanilla
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the cmv13-th-train+val dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
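Pending more details from the authors, a hedged sketch of the most common usage path: the `transformers` ASR pipeline should load this fine-tuned Whisper checkpoint directly (the audio file name is a placeholder).
```python
# Hedged sketch: transcribing Thai speech with the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="tensorops/whisper-small-th-cmv13-vanilla",
)
result = asr("sample_th.wav")  # placeholder path to a 16 kHz audio file
print(result["text"])
```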
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
wolfram/miquliz-120b-GGUF
|
wolfram
| 2024-02-07T12:30:41Z | 0 | 4 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"de",
"fr",
"es",
"it",
"base_model:152334H/miqu-1-70b-sf",
"base_model:merge:152334H/miqu-1-70b-sf",
"base_model:lizpreciatior/lzlv_70b_fp16_hf",
"base_model:merge:lizpreciatior/lzlv_70b_fp16_hf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-05T23:29:59Z |
---
base_model:
- 152334H/miqu-1-70b-sf
- lizpreciatior/lzlv_70b_fp16_hf
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- mergekit
- merge
---
# miquliz-120b-GGUF

- EXL2: [2.4bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.4bpw-h6-exl2) | [2.65bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.65bpw-h6-exl2) | [2.9bpw](https://huggingface.co/LoneStriker/miquliz-120b-2.9bpw-h6-exl2) | [4.0bpw](https://huggingface.co/LoneStriker/miquliz-120b-4.0bpw-h6-exl2)
- GGUF: IQ3_XXS | [Q4_K_S+Q4_K_M](https://huggingface.co/NanoByte/miquliz-120b-Q4-GGUF)
- HF: [wolfram/miquliz-120b](https://huggingface.co/wolfram/miquliz-120b)
This is a 120b frankenmerge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) using [mergekit](https://github.com/cg123/mergekit).
Inspired by [goliath-120b](https://huggingface.co/alpindale/goliath-120b).
Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit) - the open-source platform for building in-app AI Copilots into any product, with any LLM model. Check out their GitHub.
Thanks for the EXL2 and GGUF quants, [Lone Striker](https://huggingface.co/LoneStriker) and [NanoByte](https://huggingface.co/NanoByte)!
## Prompt template: Mistral
```
<s>[INST] {prompt} [/INST]
```
See also: [🐺🐦⬛ LLM Prompt Format Comparison/Test: Mixtral 8x7B Instruct with **17** different instruct templates : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/18ljvxb/llm_prompt_format_comparisontest_mixtral_8x7b/)
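As a hedged illustration (the file name is an assumption; point it at whichever quant you actually downloaded), the Mistral template above can be applied with llama-cpp-python like this:
```python
# Hedged sketch: running a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="miquliz-120b.IQ3_XXS.gguf",  # assumption -- use your local quant file
    n_ctx=32768,      # the card lists a 32768-token max context
    n_gpu_layers=-1,  # offload as many layers as fit on your GPU
)

prompt = "<s>[INST] Write a haiku about merged language models. [/INST]"
out = llm(prompt, max_tokens=128, stop=["</s>"])
print(out["choices"][0]["text"])
```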
## Model Details
- Max Context: 32768 tokens
- Layers: 137
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 16]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [8, 24]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [17, 32]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [25, 40]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [33, 48]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [41, 56]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [49, 64]
model: 152334H/miqu-1-70b-sf
- sources:
- layer_range: [57, 72]
model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
- layer_range: [65, 80]
model: 152334H/miqu-1-70b-sf
```
## Credits & Special Thanks
- 1st model:
- original (unreleased) model: [mistralai (Mistral AI_)](https://huggingface.co/mistralai)
- leaked model: [miqudev/miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b)
- f16 model: [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- 2nd model: [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf)
- mergekit: [arcee-ai/mergekit: Tools for merging pretrained large language models.](https://github.com/arcee-ai/mergekit)
- mergekit_config.yml: [alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)
- gguf quantization: [ggerganov/llama.cpp: Port of Facebook's LLaMA model in C/C++](https://github.com/ggerganov/llama.cpp)
### Support
- [My Ko-fi page](https://ko-fi.com/wolframravenwolf) if you'd like to tip me to say thanks or request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!
#### DISCLAIMER: THIS IS [BASED ON A LEAKED ASSET](https://huggingface.co/miqudev/miqu-1-70b/discussions/10) AND HAS NO LICENSE ASSOCIATED WITH IT. USE AT YOUR OWN RISK.
|
IsaacMwesigwa/footballer-recognition-2
|
IsaacMwesigwa
| 2024-02-07T12:30:03Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain",
"dataset:footballer-recognition-2/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-07T12:29:44Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- footballer-recognition-2/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
- loss: 5.661193370819092
- f1_macro: 0.014131400288297163
- f1_micro: 0.03746085280655264
- f1_weighted: 0.014145017633792991
- precision_macro: 0.015760162960355265
- precision_micro: 0.03746085280655264
- precision_weighted: 0.015775349819387167
- recall_macro: 0.03742478941034898
- recall_micro: 0.03746085280655264
- recall_weighted: 0.03746085280655264
- accuracy: 0.03746085280655264
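The card does not include inference code; as a hedged sketch, the standard `transformers` image-classification pipeline should load this AutoTrain ResNet checkpoint (the image path is a placeholder).
```python
# Hedged sketch: classifying a footballer photo with the transformers pipeline.
from transformers import pipeline

classifier = pipeline("image-classification", model="IsaacMwesigwa/footballer-recognition-2")
predictions = classifier("player.jpg")  # placeholder path or URL to an image
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```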
|
alexgastev/Reinforce-PixelCopter_v1
|
alexgastev
| 2024-02-07T12:29:37Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T12:00:12Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 14.50 +/- 15.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
musiclang/musiclang-chord-v2-4k
|
musiclang
| 2024-02-07T12:28:06Z | 15 | 3 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-29T15:52:56Z |
---
widget:
- text: CHORD_CHANGE
example_title: Predict chord progression
---
MusicLang Chord Predictor model
===============================

MusicLang Chord Predictor is a generative AI model for creating original chord-scale progressions in the MusicLang format.
It can be used for different use cases:
- Predict a chord progression from scratch (a fixed number of chords)
- Continue a chord progression (using a MusicLang prompt)
If you are only looking to generate chord progressions in an easily readable format, consider using [our text chord predictor](https://huggingface.co/musiclang/text-chord-predictor)
To make predictions, we have an inference package available here: [MusicLang Predict](https://github.com/MusicLang/musiclang_predict),
which is based on the MusicLang language: [MusicLang](https://github.com/MusicLang/musiclang).
Installation
------------
Install the musiclang-predict package with pip :
```bash
pip install musiclang-predict
```
How to use?
------------
1. Generate a 4-chord progression in a few lines:
```python
from musiclang_predict import predict_chords, MusicLangTokenizer
from transformers import AutoModelForCausalLM, AutoTokenizer
from musiclang.library import *
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained('musiclang/musiclang-chord-v2-4k')
tokenizer = AutoTokenizer.from_pretrained('musiclang/musiclang-chord-v2-4k')
soundtrack = predict_chords(model, tokenizer, nb_chords=4, temperature=1.0)
# Give the chord a simple voicing (closed position chord)
soundtrack = soundtrack(b0, b1, b2, b3)
# Save it to midi
soundtrack.to_midi('song.mid', tempo=120, time_signature=(4, 4))
```
2. Use a prompt
```python
from musiclang_predict import predict_chords, MusicLangTokenizer
from transformers import AutoModelForCausalLM, AutoTokenizer
from musiclang.library import *
prompt = (I % I.M) + (V % I.M)['6'].o(-1)
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained('musiclang/musiclang-chord-v2-4k')
tokenizer = AutoTokenizer.from_pretrained('musiclang/musiclang-chord-v2-4k')
soundtrack = predict_chords(model, tokenizer, nb_chords=4, prompt=prompt)
# Give the chord a simple voicing (closed position chord)
soundtrack = soundtrack(b0, b1, b2, b3)
# Save it to midi
soundtrack.to_midi('song.mid', tempo=120, time_signature=(4, 4))
```
Contact us
----------
If you want to help shape the future of open source music generation,
please contact [us](mailto:[email protected])
License
========
This model is free to use for research and open-source purposes only. Please credit me (Florian GARDIN) and MusicLang if you do so.
If you would like to use this in a commercial product, please contact [us](mailto:[email protected]) to discuss licensing terms and potential integration in your product. I am looking forward to hearing about your project!
|
briaai/BRIA-2.2-ControlNet-Canny
|
briaai
| 2024-02-07T12:25:41Z | 19 | 5 |
diffusers
|
[
"diffusers",
"text-to-image",
"controlnet model",
"legal liability",
"commercial use",
"license:other",
"region:us"
] |
text-to-image
| 2024-02-07T10:04:03Z |
---
license: other
license_name: bria-2.2
license_link: https://bria.ai/customer-general-terms-and-conditions
inference: false
tags:
- text-to-image
- controlnet model
- legal liability
- commercial use
extra_gated_prompt: This model weights by BRIA AI can be obtained after a commercial license is agreed upon. Fill in the form below and we reach out to you.
extra_gated_fields:
Name: text
Company/Org name: text
Org Type (Early/Growth Startup, Enterprise, Academy): text
Role: text
Country: text
Email: text
By submitting this form, I agree to BRIA’s Privacy policy and Terms & conditions, see links below: checkbox
---
# BRIA 2.2 ControlNet Canny Model Card
[***Click here for Demo***](https://huggingface.co/spaces/briaai/BRIA-2.2-ControlNets)
BRIA 2.2 ControlNet-Canny, trained on the foundation of [BRIA 2.2 Text-to-Image](https://huggingface.co/briaai/BRIA-2.2), enables the generation of high-quality images guided by a textual prompt and the extracted edge map from an input image. This allows for the creation of different variations of an image, all sharing the same geometry.
[BRIA 2.2](https://huggingface.co/briaai/BRIA-2.2) was trained from scratch exclusively on licensed data from our esteemed data partners. Therefore, they are safe for commercial use and provide full legal liability coverage for copyright and privacy infringement, as well as harmful content mitigation. That is, our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.

### Model Description
- **Developed by:** BRIA AI
- **Model type:** [ControlNet](https://huggingface.co/docs/diffusers/using-diffusers/controlnet) for Latent diffusion
- **License:** [bria-2.2](https://bria.ai/bria-huggingface-model-license-agreement/)
- **Model Description:** ControlNet Canny for BRIA 2.2 Text-to-Image model. The model generates images guided by text and the edge map of the conditioned image.
- **Resources for more information:** [BRIA AI](https://bria.ai/)
### Get Access
BRIA 2.2 ControlNet-Canny requires access to BRIA 2.2 Text-to-Image. For more information, [click here](https://huggingface.co/briaai/BRIA-2.2).
### Code example using Diffusers
```
pip install diffusers
```
```py
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "briaai/BRIA-2.2-ControlNet-Canny",
    torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "briaai/BRIA-2.2",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
pipe.to("cuda")

prompt = "A portrait of a Beautiful and playful ethereal singer, golden designs, highly detailed, blurry background"
negative_prompt = "Logo,Watermark,Text,Ugly,Morbid,Extra fingers,Poorly drawn hands,Mutation,Blurry,Extra limbs,Gross proportions,Missing arms,Mutated hands,Long neck,Duplicate,Mutilated,Mutilated hands,Poorly drawn face,Deformed,Bad anatomy,Cloned face,Malformed limbs,Missing legs,Too many fingers"

# Calculate Canny image
low_threshold, high_threshold = 100, 200  # example thresholds; not specified in the original card
input_image = cv2.imread('pics/singer.png')
input_image = cv2.Canny(input_image, low_threshold, high_threshold)
input_image = input_image[:, :, None]
input_image = np.concatenate([input_image, input_image, input_image], axis=2)
canny_image = Image.fromarray(input_image)

image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=canny_image, controlnet_conditioning_scale=1.0, height=1024, width=1024).images[0]
```
|
OctavianB/MistralRoSummary
|
OctavianB
| 2024-02-07T12:23:28Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T12:23:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ffranchina/LeReS
|
ffranchina
| 2024-02-07T12:17:19Z | 0 | 0 | null |
[
"license:unknown",
"region:us"
] | null | 2024-02-07T11:32:36Z |
---
license: unknown
---
These are the weights of the NN used by [LeReS](https://github.com/aim-uofa/AdelaiDepth/tree/main/LeReS).
*DISCLAIMER*: I do not own anything, I am just making the trained weights available on a reliable platform.
|
Aneesha/phi2_DPO
|
Aneesha
| 2024-02-07T12:16:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T12:16:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
asadmasad/output-6.7b-26k-ds-test-save-state-no-save-eval-strat
|
asadmasad
| 2024-02-07T12:13:29Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T11:52:54Z |
---
pipeline_tag: text-generation
---
|
smangrul/sticker_peft_model
|
smangrul
| 2024-02-07T12:10:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T12:10:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
varun-v-rao/t5-large-bn-adapter-6.34M-snli-model2
|
varun-v-rao
| 2024-02-07T12:09:27Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:google-t5/t5-large",
"base_model:finetune:google-t5/t5-large",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T04:48:05Z |
---
license: apache-2.0
base_model: t5-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-large-bn-adapter-6.34M-snli-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-bn-adapter-6.34M-snli-model2
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6035
- Accuracy: 0.8075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 59
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.308 | 1.0 | 17168 | 0.2400 | 0.9135 |
| 0.288 | 2.0 | 34336 | 0.2309 | 0.9187 |
| 0.2705 | 3.0 | 51504 | 0.2298 | 0.9216 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
CLMBR/existential-there-quantifier-transformer-4
|
CLMBR
| 2024-02-07T12:01:12Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T10:12:22Z |
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8637
## Model description
More information needed
## Intended uses & limitations
More information needed
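The checkpoint is tagged for text generation with an OPT-style architecture, so a minimal, unofficial sampling sketch might look like the following; the prompt and generation settings are illustrative only and not from the authors.

```python
from transformers import pipeline

# hedged sketch: assumes the tokenizer is bundled with this checkpoint on the Hub
generator = pipeline("text-generation", model="CLMBR/existential-there-quantifier-transformer-4")

# illustrative prompt only
print(generator("There is a", max_new_tokens=20, do_sample=True, temperature=0.8)[0]["generated_text"])
```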
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2235 | 0.03 | 76320 | 4.1958 |
| 4.0188 | 1.03 | 152640 | 4.0280 |
| 3.91 | 0.03 | 228960 | 3.9539 |
| 3.842 | 1.03 | 305280 | 3.9126 |
| 3.7897 | 0.03 | 381600 | 3.8869 |
| 3.7491 | 1.03 | 457920 | 3.8716 |
| 3.7159 | 0.03 | 534240 | 3.8599 |
| 3.6834 | 1.03 | 610560 | 3.8530 |
| 3.6553 | 0.03 | 686880 | 3.8482 |
| 3.628 | 1.03 | 763200 | 3.8453 |
| 3.605 | 0.03 | 839520 | 3.8447 |
| 3.5866 | 1.03 | 915840 | 3.8442 |
| 3.57 | 0.03 | 992160 | 3.8431 |
| 3.5489 | 1.03 | 1068480 | 3.8447 |
| 3.5349 | 0.03 | 1144800 | 3.8466 |
| 3.5248 | 1.03 | 1221120 | 3.8464 |
| 3.5096 | 0.03 | 1297440 | 3.8480 |
| 3.4935 | 1.03 | 1373760 | 3.8504 |
| 3.4796 | 0.03 | 1450080 | 3.8505 |
| 3.4725 | 1.03 | 1526400 | 3.8529 |
| 3.4618 | 0.03 | 1602720 | 3.8541 |
| 3.4538 | 1.03 | 1679040 | 3.8553 |
| 3.4437 | 0.03 | 1755360 | 3.8561 |
| 3.433 | 1.03 | 1831680 | 3.8574 |
| 3.4159 | 0.03 | 1908000 | 3.8589 |
| 3.4048 | 1.03 | 1984320 | 3.8615 |
| 3.3929 | 0.03 | 2060640 | 3.8618 |
| 3.3857 | 1.03 | 2136960 | 3.8629 |
| 3.3765 | 0.03 | 2213280 | 3.8634 |
| 3.3637 | 0.03 | 2289600 | 3.8657 |
| 3.3528 | 0.03 | 2365920 | 3.8668 |
| 3.3489 | 1.03 | 2442240 | 3.8667 |
| 3.338 | 0.03 | 2518560 | 3.8668 |
| 3.3283 | 1.03 | 2594880 | 3.8668 |
| 3.3179 | 0.03 | 2671200 | 3.8676 |
| 3.3121 | 1.03 | 2747520 | 3.8667 |
| 3.3055 | 0.03 | 2823840 | 3.8658 |
| 3.2992 | 0.03 | 2900160 | 3.8658 |
| 3.2958 | 1.03 | 2976480 | 3.8648 |
| 3.2866 | 0.02 | 3052726 | 3.8637 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CLMBR/existential-there-quantifier-transformer-1
|
CLMBR
| 2024-02-07T12:00:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T10:11:20Z |
---
tags:
- generated_from_trainer
model-index:
- name: existential-there-quantifier-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# existential-there-quantifier-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8657
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2241 | 0.03 | 76320 | 4.1976 |
| 4.0185 | 1.03 | 152640 | 4.0288 |
| 3.9098 | 0.03 | 228960 | 3.9549 |
| 3.8424 | 1.03 | 305280 | 3.9139 |
| 3.7897 | 0.03 | 381600 | 3.8885 |
| 3.7495 | 1.03 | 457920 | 3.8726 |
| 3.7173 | 0.03 | 534240 | 3.8620 |
| 3.6848 | 1.03 | 610560 | 3.8554 |
| 3.656 | 0.03 | 686880 | 3.8512 |
| 3.6306 | 1.03 | 763200 | 3.8476 |
| 3.6077 | 0.03 | 839520 | 3.8454 |
| 3.5894 | 1.03 | 915840 | 3.8462 |
| 3.5702 | 0.03 | 992160 | 3.8450 |
| 3.5528 | 1.03 | 1068480 | 3.8456 |
| 3.537 | 0.03 | 1144800 | 3.8472 |
| 3.5234 | 1.03 | 1221120 | 3.8479 |
| 3.5086 | 0.03 | 1297440 | 3.8489 |
| 3.4939 | 1.03 | 1373760 | 3.8503 |
| 3.481 | 0.03 | 1450080 | 3.8515 |
| 3.4736 | 1.03 | 1526400 | 3.8532 |
| 3.4635 | 0.03 | 1602720 | 3.8531 |
| 3.4539 | 0.03 | 1679040 | 3.8541 |
| 3.4447 | 1.03 | 1755360 | 3.8572 |
| 3.4313 | 0.03 | 1831680 | 3.8587 |
| 3.4182 | 0.03 | 1908000 | 3.8596 |
| 3.4054 | 1.03 | 1984320 | 3.8609 |
| 3.3944 | 0.03 | 2060640 | 3.8624 |
| 3.3856 | 1.03 | 2136960 | 3.8638 |
| 3.3773 | 0.03 | 2213280 | 3.8645 |
| 3.3645 | 1.03 | 2289600 | 3.8652 |
| 3.3559 | 0.03 | 2365920 | 3.8659 |
| 3.3475 | 1.03 | 2442240 | 3.8671 |
| 3.3376 | 0.03 | 2518560 | 3.8674 |
| 3.3262 | 1.03 | 2594880 | 3.8677 |
| 3.316 | 0.03 | 2671200 | 3.8670 |
| 3.3108 | 0.03 | 2747520 | 3.8680 |
| 3.3042 | 1.03 | 2823840 | 3.8675 |
| 3.2997 | 0.03 | 2900160 | 3.8669 |
| 3.2947 | 1.03 | 2976480 | 3.8666 |
| 3.2859 | 0.02 | 3052726 | 3.8657 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
veronica1608/my_ner_model
|
veronica1608
| 2024-02-07T11:58:04Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-07T09:00:58Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: my_ner_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_ner_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2690
- Precision: 0.5545
- Recall: 0.3253
- F1: 0.4100
- Accuracy: 0.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
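As a starting point (not an official usage example), the sketch below shows how this fine-tuned DistilBERT checkpoint could be queried through the 🤗 `pipeline` API; the input sentence is purely illustrative.

```python
from transformers import pipeline

# hedged sketch: assumes the tokenizer and label mapping were pushed with the checkpoint
ner = pipeline(
    "token-classification",
    model="veronica1608/my_ner_model",
    aggregation_strategy="simple",
)

# illustrative sentence only
for entity in ner("Hugging Face was founded in New York City."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```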
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.2829 | 0.5103 | 0.2289 | 0.3161 | 0.9377 |
| No log | 2.0 | 426 | 0.2690 | 0.5545 | 0.3253 | 0.4100 | 0.9420 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
RachidAR/AFlow-SegMoe-1Bx3-v0.1
|
RachidAR
| 2024-02-07T11:55:35Z | 6 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-1.5",
"moe",
"segmoe",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-07T11:10:40Z |
---
license: apache-2.0
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-1.5
- moe
- segmoe
language:
- en
library_name: diffusers
---
## Warning
This is an experimental model. It works only with the segmoe library!
## Experts
- source_model: Lykon/dreamshaper-8 (base)
- source_model: Lykon/AAM_AnyLora_AnimeMix
- source_model: stablediffusionapi/realistic-vision-51
## Usage
This model can be used via the [segmoe](https://github.com/segmind/segmoe) library.
Make sure to install segmoe by running
```bash
pip install segmoe
```
```python
from segmoe import SegMoEPipeline
pipeline = SegMoEPipeline("RachidAR/AFlow-SegMoe-1Bx3-v0.1", device = "cuda", safety_checker = None)
prompt = "cosmic canvas, orange city background, painting of a chubby cat"
negative_prompt = "nsfw, bad quality, worse quality"
img = pipeline(
prompt=prompt,
negative_prompt=negative_prompt,
height=1024,
width=1024,
num_inference_steps=25,
guidance_scale=7.5,
).images[0]
img.save("image.png")
```



|
haturusinghe/subasa-xlm-r
|
haturusinghe
| 2024-02-07T11:46:55Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-02-07T11:43:45Z |
---
library_name: transformers
tags: []
---
Run details: https://wandb.ai/s-haturusinghe/finetune-after_mrp-with_pipeline-updated/runs/e0cwjvvz/overview?workspace=user-haturusinghe

Run summary:

| Metric | Value |
|:-------------------------------|-------------------:|
| eval/f1_0 | 0.87064 |
| eval/f1_1 | 0.81809 |
| eval/f1_macro | 0.84437 |
| eval/f1_weighted | 0.8493 |
| eval/loss | 0.43726 |
| eval/precision_0 | 0.88518 |
| eval/precision_1 | 0.79962 |
| eval/precision_weighted | 0.85044 |
| eval/recall_0 | 0.85657 |
| eval/recall_1 | 0.83744 |
| eval/recall_weighted | 0.8488 |
| eval/runtime | 74.0515 |
| eval/samples_per_second | 33.76 |
| eval/steps_per_second | 2.12 |
| train/epoch | 5.0 |
| train/global_step | 2345 |
| train/learning_rate | 0.0 |
| train/loss | 0.2158 |
| train/total_flos | 9866664576000000.0 |
| train/train_loss | 0.38705 |
| train/train_runtime | 3869.0269 |
| train/train_samples_per_second | 9.692 |
| train/train_steps_per_second | 0.606 |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alexgastev/Reinforce-CartPole-v1
|
alexgastev
| 2024-02-07T11:46:53Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T11:46:43Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Maaz911/mistral-Mistral-Finetune-1
|
Maaz911
| 2024-02-07T11:45:43Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T11:44:38Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-Mistral-Finetune-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-Mistral-Finetune-1
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0554
## Model description
More information needed
## Intended uses & limitations
More information needed
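Because this repository holds a PEFT adapter trained on top of `mistralai/Mistral-7B-v0.1`, loading could look roughly like the sketch below; it assumes access to the base model and is not documented usage from the author.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# assumption: this repo contains only adapter weights for the base model below
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# attach the fine-tuned adapter
model = PeftModel.from_pretrained(base_model, "Maaz911/mistral-Mistral-Finetune-1")

# illustrative prompt only
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(base_model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```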
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8941 | 0.05 | 25 | 1.7724 |
| 1.8315 | 0.1 | 50 | 1.7261 |
| 1.7522 | 0.14 | 75 | 1.6971 |
| 1.6974 | 0.19 | 100 | 1.6678 |
| 1.7149 | 0.24 | 125 | 1.6430 |
| 1.6037 | 0.29 | 150 | 1.6201 |
| 1.6611 | 0.34 | 175 | 1.6057 |
| 1.7131 | 0.38 | 200 | 1.5854 |
| 1.7619 | 0.43 | 225 | 1.5696 |
| 1.6062 | 0.48 | 250 | 1.5494 |
| 1.5171 | 0.53 | 275 | 1.5284 |
| 1.6484 | 0.58 | 300 | 1.5091 |
| 1.7207 | 0.62 | 325 | 1.4958 |
| 1.6548 | 0.67 | 350 | 1.4817 |
| 1.6447 | 0.72 | 375 | 1.4746 |
| 1.5294 | 0.77 | 400 | 1.4358 |
| 1.6865 | 0.82 | 425 | 1.4269 |
| 1.4704 | 0.87 | 450 | 1.3963 |
| 1.4935 | 0.91 | 475 | 1.3714 |
| 1.4714 | 0.96 | 500 | 1.3496 |
| 1.4913 | 1.01 | 525 | 1.3327 |
| 1.3627 | 1.06 | 550 | 1.3060 |
| 1.2748 | 1.11 | 575 | 1.2857 |
| 1.1856 | 1.15 | 600 | 1.2624 |
| 1.1102 | 1.2 | 625 | 1.2413 |
| 1.2375 | 1.25 | 650 | 1.2214 |
| 1.2421 | 1.3 | 675 | 1.1989 |
| 1.1946 | 1.35 | 700 | 1.1823 |
| 1.2389 | 1.39 | 725 | 1.1674 |
| 1.2961 | 1.44 | 750 | 1.1567 |
| 1.1831 | 1.49 | 775 | 1.1566 |
| 1.2144 | 1.54 | 800 | 1.1326 |
| 1.2881 | 1.59 | 825 | 1.1279 |
| 1.2584 | 1.63 | 850 | 1.1073 |
| 1.2837 | 1.68 | 875 | 1.0878 |
| 1.1251 | 1.73 | 900 | 1.0812 |
| 1.0938 | 1.78 | 925 | 1.0706 |
| 1.0304 | 1.83 | 950 | 1.0636 |
| 1.313 | 1.88 | 975 | 1.0676 |
| 1.2245 | 1.92 | 1000 | 1.0604 |
| 1.1293 | 1.97 | 1025 | 1.0554 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
karthik678/my-cars
|
karthik678
| 2024-02-07T11:33:25Z | 4 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-07T11:29:01Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-CARS Dreambooth model trained by karthik678 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 4JK21CV017
Sample pictures of this concept:



|
RMWeerasinghe/t5-small-finetuned-BBCNews_v2
|
RMWeerasinghe
| 2024-02-07T11:17:13Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-02-07T11:14:17Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-BBCNews_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-BBCNews_v2
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3170
- Rouge1: 0.1558
- Rouge2: 0.1263
- Rougel: 0.1483
- Rougelsum: 0.1496
## Model description
More information needed
## Intended uses & limitations
More information needed
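For orientation only, a summarization call with the 🤗 `pipeline` API could look like the sketch below; the article text and generation lengths are placeholders.

```python
from transformers import pipeline

# hedged usage sketch for this fine-tuned T5 summarizer
summarizer = pipeline("summarization", model="RMWeerasinghe/t5-small-finetuned-BBCNews_v2")

article = "Replace this placeholder with a BBC-style news article to summarize."
summary = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```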
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 75 | 0.4430 | 0.1374 | 0.098 | 0.1257 | 0.1289 |
| No log | 1.99 | 150 | 0.3657 | 0.1466 | 0.1112 | 0.1367 | 0.1388 |
| No log | 2.99 | 225 | 0.3449 | 0.1536 | 0.1222 | 0.145 | 0.147 |
| No log | 3.99 | 300 | 0.3320 | 0.1534 | 0.1226 | 0.1454 | 0.147 |
| 0.609 | 5.0 | 376 | 0.3245 | 0.1534 | 0.1229 | 0.1457 | 0.1472 |
| 0.609 | 6.0 | 451 | 0.3214 | 0.155 | 0.125 | 0.147 | 0.1486 |
| 0.609 | 6.99 | 526 | 0.3181 | 0.1555 | 0.1261 | 0.148 | 0.1496 |
| 0.609 | 7.98 | 600 | 0.3170 | 0.1558 | 0.1263 | 0.1483 | 0.1496 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
athmurikarthik/videomae-base-action_detection
|
athmurikarthik
| 2024-02-07T11:16:11Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-02-06T10:19:23Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-action_detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-action_detection
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2662
- Accuracy: 0.7243
## Model description
More information needed
## Intended uses & limitations
More information needed
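A rough, unofficial inference sketch is shown below; it feeds 16 random dummy frames through the model just to illustrate the expected clip shape, and assumes the processor configuration was pushed alongside the weights.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "athmurikarthik/videomae-base-action_detection"
processor = VideoMAEImageProcessor.from_pretrained(repo)  # assumes the processor config is in the repo
model = VideoMAEForVideoClassification.from_pretrained(repo)

# VideoMAE expects a clip of 16 frames; random uint8 frames stand in for a real video here
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(16)]
inputs = processor(video, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```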
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 15200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0956 | 0.02 | 305 | 1.3464 | 0.4774 |
| 0.683 | 1.02 | 610 | 2.3774 | 0.3704 |
| 0.5519 | 2.02 | 915 | 2.1501 | 0.3128 |
| 1.5863 | 3.02 | 1220 | 2.7112 | 0.2387 |
| 0.8028 | 4.02 | 1525 | 1.5204 | 0.7037 |
| 1.1797 | 5.02 | 1830 | 2.6479 | 0.2963 |
| 1.185 | 6.02 | 2135 | 0.8982 | 0.7860 |
| 0.9516 | 7.02 | 2440 | 1.2030 | 0.6008 |
| 0.5755 | 8.02 | 2745 | 0.8003 | 0.8189 |
| 0.6815 | 9.02 | 3050 | 2.3653 | 0.4198 |
| 1.1649 | 10.02 | 3355 | 3.0645 | 0.4403 |
| 1.1024 | 11.02 | 3660 | 2.4187 | 0.4321 |
| 1.1158 | 12.02 | 3965 | 2.2631 | 0.5597 |
| 0.2375 | 13.02 | 4270 | 2.2977 | 0.5432 |
| 0.7445 | 14.02 | 4575 | 1.0086 | 0.7860 |
| 0.6555 | 15.02 | 4880 | 0.7161 | 0.8560 |
| 0.8807 | 16.02 | 5185 | 1.2404 | 0.6584 |
| 1.0477 | 17.02 | 5490 | 1.6849 | 0.6173 |
| 0.498 | 18.02 | 5795 | 2.0557 | 0.5844 |
| 0.5536 | 19.02 | 6100 | 2.0703 | 0.5967 |
| 0.2232 | 20.02 | 6405 | 2.7690 | 0.4856 |
| 0.5589 | 21.02 | 6710 | 0.9549 | 0.7243 |
| 0.3377 | 22.02 | 7015 | 0.6488 | 0.8189 |
| 0.7096 | 23.02 | 7320 | 1.6638 | 0.5556 |
| 0.1201 | 24.02 | 7625 | 1.6283 | 0.5761 |
| 0.136 | 25.02 | 7930 | 1.4397 | 0.5926 |
| 0.2558 | 26.02 | 8235 | 1.7421 | 0.5350 |
| 0.3245 | 27.02 | 8540 | 1.2982 | 0.6132 |
| 0.0029 | 28.02 | 8845 | 1.0594 | 0.7202 |
| 0.3272 | 29.02 | 9150 | 1.0833 | 0.8272 |
| 0.0841 | 30.02 | 9455 | 1.3230 | 0.5926 |
| 0.5595 | 31.02 | 9760 | 2.5545 | 0.5844 |
| 0.0837 | 32.02 | 10065 | 1.5960 | 0.6296 |
| 0.0127 | 33.02 | 10370 | 1.8149 | 0.5720 |
| 0.3622 | 34.02 | 10675 | 2.4455 | 0.4938 |
| 0.0006 | 35.02 | 10980 | 1.6700 | 0.6461 |
| 0.0027 | 36.02 | 11285 | 2.2488 | 0.5720 |
| 0.0544 | 37.02 | 11590 | 2.6388 | 0.5514 |
| 0.2504 | 38.02 | 11895 | 1.5352 | 0.6379 |
| 0.0149 | 39.02 | 12200 | 2.2851 | 0.5391 |
| 0.4035 | 40.02 | 12505 | 1.8876 | 0.5556 |
| 0.0008 | 41.02 | 12810 | 2.4479 | 0.5473 |
| 0.3176 | 42.02 | 13115 | 2.0729 | 0.6049 |
| 0.0007 | 43.02 | 13420 | 1.5171 | 0.6255 |
| 0.3948 | 44.02 | 13725 | 1.4067 | 0.6132 |
| 0.0016 | 45.02 | 14030 | 1.0621 | 0.7325 |
| 0.2173 | 46.02 | 14335 | 1.5515 | 0.6132 |
| 0.0007 | 47.02 | 14640 | 1.2523 | 0.7284 |
| 0.2819 | 48.02 | 14945 | 1.5618 | 0.6461 |
| 0.0004 | 49.02 | 15200 | 1.2662 | 0.7243 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
deakpatrik05/aziv1
|
deakpatrik05
| 2024-02-07T11:12:42Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-02-07T11:12:42Z |
---
license: other
license_name: rvc
license_link: LICENSE
---
|
surya47/medclip-roco
|
surya47
| 2024-02-07T10:54:57Z | 2 | 2 |
transformers
|
[
"transformers",
"jax",
"hybrid-clip",
"medical",
"code",
"visual-question-answering",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
visual-question-answering
| 2024-02-07T05:26:24Z |
---
license: apache-2.0
metrics:
- accuracy
pipeline_tag: visual-question-answering
tags:
- medical
- code
---
|
dariolopez/Llama-2-databricks-dolly-oasst1-es-axolotl-GGUF
|
dariolopez
| 2024-02-07T10:52:06Z | 0 | 0 | null |
[
"es",
"license:apache-2.0",
"region:us"
] | null | 2023-09-05T07:40:24Z |
---
license: apache-2.0
language:
- es
---
Llama 2 (7B) fine-tuned on our [own Spanish instructions dataset](https://huggingface.co/datasets/dariolopez/Llama-2-databricks-dolly-oasst1-es).
In this repo you can find 4-bit and 5-bit quantized versions of the [Llama 2 (7B) Spanish fine-tuned model](https://huggingface.co/dariolopez/Llama-2-databricks-dolly-oasst1-es-axolotl).
# How to use
```sh
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && git pull && make clean && make
git clone https://huggingface.co/dariolopez/Llama-2-databricks-dolly-oasst1-es-axolotl-GGUF
./main -m ./llama-2-databricks-dolly-oasst1-es-axolotl.gguf.q4_k_m.bin -n 2048 --color --temp 0 -ngl 35 -p "<s>[INST] Describe 5 lugares para visitar en España: [/INST]"
```
# Based on
https://mlabonne.github.io/blog/posts/Quantize_Llama_2_models_using_ggml.html
|
matr1xx/scibert_scivocab_uncased-finetuned-mol-mlm-0.3-5epochs
|
matr1xx
| 2024-02-07T10:47:31Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:allenai/scibert_scivocab_uncased",
"base_model:finetune:allenai/scibert_scivocab_uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-02-07T10:38:53Z |
---
base_model: allenai/scibert_scivocab_uncased
tags:
- generated_from_trainer
model-index:
- name: scibert_scivocab_uncased-finetuned-mol-mlm-0.3-5epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_scivocab_uncased-finetuned-mol-mlm-0.3-5epochs
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5835
## Model description
More information needed
## Intended uses & limitations
More information needed
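As an illustration (not documented usage), masked-token prediction with this checkpoint could be run roughly as follows; the example sentence is arbitrary.

```python
from transformers import pipeline

# hedged fill-mask sketch for the fine-tuned SciBERT checkpoint
fill = pipeline("fill-mask", model="matr1xx/scibert_scivocab_uncased-finetuned-mol-mlm-0.3-5epochs")

# SciBERT uses a BERT-style [MASK] token
for prediction in fill("the reaction was carried out in [MASK] at room temperature."):
    print(prediction["token_str"], round(prediction["score"], 3))
```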
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.8759 | 1.0 | 180 | 0.6795 |
| 0.6773 | 2.0 | 360 | 0.6306 |
| 0.6255 | 3.0 | 540 | 0.5880 |
| 0.5912 | 4.0 | 720 | 0.5707 |
| 0.5783 | 5.0 | 900 | 0.5724 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Federic/CDAgpt-llama-13b-v3
|
Federic
| 2024-02-07T10:39:17Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-13b-hf",
"base_model:finetune:meta-llama/Llama-2-13b-hf",
"region:us"
] | null | 2024-02-07T08:37:50Z |
---
base_model: meta-llama/Llama-2-13b-hf
tags:
- generated_from_trainer
model-index:
- name: CDAgpt-llama-13b-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CDAgpt-llama-13b-v3
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
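If this repository hosts full fine-tuned weights rather than an adapter (an assumption, since the card does not say), loading could follow the usual causal-LM pattern sketched below.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# assumption: full fine-tuned Llama-2-13b weights are stored in this repo
repo = "Federic/CDAgpt-llama-13b-v3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# illustrative prompt only
inputs = tokenizer("Question: What is a CDA?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```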
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
llmware/slim-ratings-tool
|
llmware
| 2024-02-07T10:37:33Z | 71 | 3 |
transformers
|
[
"transformers",
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-24T17:03:40Z |
---
license: apache-2.0
---
# SLIM-RATINGS
<!-- Provide a quick summary of what the model is/does. -->
**slim-ratings-tool** is a 4_K_M quantized GGUF version of slim-ratings, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.
[**slim-ratings**](https://huggingface.co/llmware/slim-ratings) is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
To pull the model via API:

```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/slim-ratings-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```

Load in your favorite GGUF inference engine, or try with llmware as follows:

```python
from llmware.models import ModelCatalog

# to load the model and make a basic inference
model = ModelCatalog().load_model("slim-ratings-tool")
response = model.function_call(text_sample)

# this one line will download the model and run a series of tests
ModelCatalog().tool_test_run("slim-ratings-tool", verbose=True)
```

Slim models can also be loaded even more simply as part of multi-model, multi-step LLMfx calls:

```python
from llmware.agents import LLMfx

llm_fx = LLMfx()
llm_fx.load_tool("ratings")
response = llm_fx.ratings(text)
```
Note: please review [**config.json**](https://huggingface.co/llmware/slim-ratings-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
## Model Card Contact
Darren Oberst & llmware team
[Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)
|
aanaya/rare-puppers
|
aanaya
| 2024-02-07T10:37:32Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-07T09:46:25Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.21568627655506134
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
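For reference, one plausible way to query this fine-tuned ViT classifier (not part of the HuggingPics template) is sketched below; the image URL is a placeholder.

```python
import requests
from PIL import Image
from transformers import pipeline

# hedged usage sketch for the HuggingPics-trained classifier
classifier = pipeline("image-classification", model="aanaya/rare-puppers")

# placeholder URL; substitute an image of your own
url = "https://example.com/sample.jpg"
image = Image.open(requests.get(url, stream=True).raw)
for prediction in classifier(image):
    print(prediction["label"], round(prediction["score"], 3))
```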
## Example Images
#### Abelmoschus esculentus leaves

#### Cannabis sativa leaves

#### Crotalaria juncea leaves

#### Jatropha multifida leaves

#### Tagetes minuta leaves

|
SamiaNasrin/NlpGroup21
|
SamiaNasrin
| 2024-02-07T10:36:39Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-02-07T10:35:47Z |
# NlpGroup21 Model Repository
Welcome to the NlpGroup21 model repository! This repository contains the model and related files for our project.
|
ramsi-k/rl_course_vizdoom_health_gathering_supreme
|
ramsi-k
| 2024-02-07T10:27:38Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T10:27:26Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 10.28 +/- 4.04
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r ramsi-k/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
llmware/slim-intent-tool
|
llmware
| 2024-02-07T10:24:20Z | 70 | 4 |
transformers
|
[
"transformers",
"gguf",
"llama",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-04T21:55:25Z |
---
license: apache-2.0
---
# SLIM-INTENT-TOOL
<!-- Provide a quick summary of what the model is/does. -->
**slim-intent-tool** is a 4_K_M quantized GGUF version of slim-intent, providing a small, fast inference implementation, optimized for multi-model concurrent deployment.
[**slim-intent**](https://huggingface.co/llmware/slim-intent) is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
To pull the model via API:

```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/slim-intent-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```

Load in your favorite GGUF inference engine, or try with llmware as follows:

```python
from llmware.models import ModelCatalog

# to load the model and make a basic inference
model = ModelCatalog().load_model("slim-intent-tool")
response = model.function_call(text_sample)

# this one line will download the model and run a series of tests
ModelCatalog().tool_test_run("slim-intent-tool", verbose=True)
```

Slim models can also be orchestrated as part of multi-model, multi-step LLMfx calls:

```python
from llmware.agents import LLMfx

llm_fx = LLMfx()
llm_fx.load_tool("intent")
response = llm_fx.intent(text)
```
Note: please review [**config.json**](https://huggingface.co/llmware/slim-intent-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and full test set.
## Model Card Contact
Darren Oberst & llmware team
[Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)
|
llmware/slim-intent
|
llmware
| 2024-02-07T10:20:35Z | 11 | 9 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-02-04T21:54:57Z |
---
license: apache-2.0
inference: false
---
# SLIM-INTENT
<!-- Provide a quick summary of what the model is/does. -->
**slim-intent** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, consisting of small, specialized decoder-based models, fine-tuned for function-calling.
slim-intent has been fine-tuned for **intent analysis** function calls, generating output consisting of a python dictionary corresponding to specified keys, e.g.:
`{"intent": ["complaint"]}`
SLIM models are designed to generate structured output that can be used programmatically as part of a multi-step, multi-model LLM-based automation workflow.
Each slim model has a 'quantized tool' version, e.g., [**'slim-intent-tool'**](https://huggingface.co/llmware/slim-intent-tool).
## Prompt format:
`function = "classify"`
`params = "intent"`
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
<details>
<summary>Transformers Script </summary>
```python
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-intent")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-intent")

function = "classify"
params = "intent"

text = "I am really impressed with the quality of the product and the service that I have received so far."

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100
)

output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

print("output only: ", output_only)

# here's the fun part: convert the generated string into a python dictionary
try:
    output_only = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)
```
</details>
<details>
<summary>Using as Function Call in LLMWare</summary>
```python
from llmware.models import ModelCatalog

slim_model = ModelCatalog().load_model("llmware/slim-intent")
response = slim_model.function_call(text, params=["intent"], function="classify")

print("llmware - llm_response: ", response)
```
</details>
## Model Card Contact
Darren Oberst & llmware team
[Join us on Discord](https://discord.gg/MhZn5Nc39h)
|
sajaw/AntModel-7B-XLLM-Demo-LoRA
|
sajaw
| 2024-02-07T10:15:06Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:alexsherstinsky/Mistral-7B-v0.1-sharded",
"base_model:adapter:alexsherstinsky/Mistral-7B-v0.1-sharded",
"region:us"
] | null | 2024-02-07T10:14:53Z |
---
library_name: peft
base_model: alexsherstinsky/Mistral-7B-v0.1-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
silvering/vit-emotions-classification-fp16
|
silvering
| 2024-02-07T10:14:13Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-07T09:52:00Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-emotions-fp16
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.92875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-emotions-fp16
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3314
- Accuracy: 0.9287
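For quick experimentation, a minimal inference sketch is shown below (this is not part of the original card; the repo id is taken from this row, the local image path is hypothetical, and the label set is whatever the imagefolder dataset defined):

```python
# Hedged sketch: classify one image with the fine-tuned ViT checkpoint
# using the standard transformers image-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="silvering/vit-emotions-classification-fp16",  # repo id from this row
)
predictions = classifier("path/to/image.jpg")  # hypothetical local image path
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```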
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 50 | 1.7532 | 0.4263 |
| No log | 2.0 | 100 | 1.4569 | 0.535 |
| No log | 3.0 | 150 | 1.3329 | 0.5262 |
| No log | 4.0 | 200 | 1.1306 | 0.6475 |
| No log | 5.0 | 250 | 1.0279 | 0.7275 |
| No log | 6.0 | 300 | 0.8815 | 0.7863 |
| No log | 7.0 | 350 | 0.7592 | 0.8337 |
| No log | 8.0 | 400 | 0.7329 | 0.785 |
| No log | 9.0 | 450 | 0.6043 | 0.875 |
| 1.1234 | 10.0 | 500 | 0.5688 | 0.8612 |
| 1.1234 | 11.0 | 550 | 0.5193 | 0.88 |
| 1.1234 | 12.0 | 600 | 0.4879 | 0.8938 |
| 1.1234 | 13.0 | 650 | 0.4170 | 0.9038 |
| 1.1234 | 14.0 | 700 | 0.4425 | 0.8912 |
| 1.1234 | 15.0 | 750 | 0.4089 | 0.905 |
| 1.1234 | 16.0 | 800 | 0.3781 | 0.9263 |
| 1.1234 | 17.0 | 850 | 0.3431 | 0.9225 |
| 1.1234 | 18.0 | 900 | 0.3388 | 0.93 |
| 1.1234 | 19.0 | 950 | 0.2973 | 0.9475 |
| 0.3972 | 20.0 | 1000 | 0.3314 | 0.9287 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
ramsi-k/poca-SoccerTwos
|
ramsi-k
| 2024-02-07T10:12:56Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-02-07T10:11:56Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: ramsi-k/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Arozhada/dqn-SpaceInvadersNoFrameskip-v4
|
Arozhada
| 2024-02-07T10:08:15Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T10:07:40Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 660.00 +/- 215.20
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Arozhada -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Arozhada -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Arozhada
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
chenhaodev/solar-10b-ocn-v1
|
chenhaodev
| 2024-02-07T10:01:49Z | 3 | 1 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:upstage/SOLAR-10.7B-v1.0",
"base_model:adapter:upstage/SOLAR-10.7B-v1.0",
"license:other",
"region:us"
] | null | 2024-02-07T09:12:23Z |
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: upstage/SOLAR-10.7B-v1.0
model-index:
- name: solar-10b-ocn-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# solar-10b-ocn-v1
This model is a fine-tuned version of upstage/SOLAR-10.7B-v1.0 on the oncc_medqa_instruct dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training script
```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py --stage sft --do_train True --model_name_or_path upstage/SOLAR-10.7B-v1.0 --template solar --finetuning_type lora --quantization_bit 4 --flash_attn True --dataset_dir data --dataset oncc_medqa_instruct --cutoff_len 1024 --learning_rate 0.0005 --num_train_epochs 1.0 --max_samples 5000 --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 100 --warmup_steps 10 --neftune_noise_alpha 0.5 --lora_rank 8 --lora_dropout 0.2 --lora_target wqkv --output_dir /workspace/solar-10b-ocn-v1 --fp16 True --plot_loss True
```
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Performance
Test script:

```bash
lm_eval --model hf --model_args pretrained=upstage/SOLAR-10.7B-v1.0,peft=chenhugging/solar-10b-ocn-v1,trust_remote_code=True,parallelize=True,load_in_4bit=True --tasks ocn,aocnp,medmcqa,pubmedqa,mmlu_clinical_knowledge,mmlu_college_medicine,mmlu_professional_medicine --device cuda:0 --limit 100
```
hf (pretrained=upstage/SOLAR-10.7B-v1.0,peft=chenhugging/solar-10b-ocn-v1,trust_remote_code=True,parallelize=True,load_in_4bit=True), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.95|± |0.0219|
|medmcqa |Yaml |none | 0|acc | 0.42|± |0.0496|
|professional_medicine| 0|none | 0|acc | 0.72|± |0.0451|
|college_medicine | 0|none | 0|acc | 0.67|± |0.0473|
|clinical_knowledge | 0|none | 0|acc | 0.64|± |0.0482|
|ocn |Yaml |none | 0|acc | 0.83|± |0.0378|
|aocnp |Yaml |none | 0|acc | 0.72|± |0.0451|
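For downstream use, a minimal loading sketch follows (not part of the original card). It assumes the adapter repo id from this row (`chenhaodev/solar-10b-ocn-v1`; the evaluation above references `chenhugging/solar-10b-ocn-v1`), 4-bit loading as during training, and a prompt format that is only illustrative:

```python
# Hedged sketch: apply the LoRA adapter to the SOLAR-10.7B base model with peft.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-v1.0", device_map="auto", load_in_4bit=True
)
model = PeftModel.from_pretrained(base, "chenhaodev/solar-10b-ocn-v1")  # adapter repo id assumed
tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-v1.0")

prompt = "### User:\nList common side effects of cisplatin.\n\n### Assistant:\n"  # illustrative prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```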
|
danaleee/CL_rank10_iter800
|
danaleee
| 2024-02-07T09:59:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-07T08:25:51Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks teddybear
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - danaleee/CL_rank10_iter800
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks teddybear using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
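A minimal inference sketch is shown below (not part of the original card; it assumes a recent `diffusers` release with `load_lora_weights` and a CUDA device):

```python
# Hedged sketch: load the DreamBooth LoRA on top of Stable Diffusion v1-4
# and generate an image with the instance prompt from this card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("danaleee/CL_rank10_iter800")  # repo id from this row
image = pipe("a photo of sks teddybear", num_inference_steps=30).images[0]
image.save("teddybear.png")
```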
|
ramsi-k/LunarLander-v2-fromscratch-tune
|
ramsi-k
| 2024-02-07T09:56:52Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T09:51:41Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -194.56 +/- 121.41
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.001
'num_envs': 64
'num_steps': 32
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'ramsi-k/LunarLander-v2-fromscratch-tune'
'batch_size': 2048
'minibatch_size': 512}
```
|
Pankaj001/Flower-Dataset-Resnet50-180
|
Pankaj001
| 2024-02-07T09:54:02Z | 0 | 0 |
tf-keras
|
[
"tf-keras",
"image-classification",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2024-01-18T08:47:21Z |
---
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-classification
---
# ResNet-50 Model for Flower Classification
This model is based on the ResNet-50 architecture and has been trained on a dataset of flower images.
## Model Details
- **Architecture**: ResNet-50
- **Input Size**: 180x180 pixels with 3 channels (RGB)
- **Data Preprocessing**: The model has been trained on normalized data.
- **Model Accuracy**: 80%
## Usage
You can use this model for flower image classification tasks. Below are some code snippets to help you get started:
flowers_url: "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
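Since the card promises snippets but does not include them, a hedged sketch is provided below (not from the original author). It assumes the repo id from this row, 180x180 RGB input scaled to [0, 1], and that the checkpoint was pushed with the Keras Hub mixin so `from_pretrained_keras` can load it; the class names are not listed in the card:

```python
# Hedged sketch: load the tf-keras ResNet-50 flower classifier from the Hub
# and classify a single image.
import numpy as np
import tensorflow as tf
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("Pankaj001/Flower-Dataset-Resnet50-180")  # repo id from this row

img = tf.keras.utils.load_img("path/to/flower.jpg", target_size=(180, 180))  # hypothetical path
x = tf.keras.utils.img_to_array(img) / 255.0  # normalization assumed from "trained on normalized data"
probs = model.predict(np.expand_dims(x, axis=0))[0]
print("predicted class index:", int(np.argmax(probs)))
```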
---
license: apache-2.0
language:
- en
library_name: keras
---
|
romil9/rvctraintest
|
romil9
| 2024-02-07T09:51:46Z | 0 | 0 | null |
[
"onnx",
"license:other",
"region:us"
] | null | 2024-02-07T06:35:36Z |
---
license: other
license_name: test
license_link: LICENSE
---
|
magus4450/speecht5_finetuned_voxpopuli_cs
|
magus4450
| 2024-02-07T09:42:35Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2024-02-07T06:06:45Z |
---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
model-index:
- name: speecht5_finetuned_voxpopuli_cs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_cs
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4251
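A minimal synthesis sketch is shown below (not part of the original card). SpeechT5 needs a speaker embedding; a random 512-dim vector is used purely as a placeholder, and a real x-vector (e.g. from the CMU Arctic speaker embeddings) would give more natural output. The Czech sentence is only an example, since the model name suggests the Czech VoxPopuli split:

```python
# Hedged sketch: text-to-speech with the fine-tuned SpeechT5 checkpoint.
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("magus4450/speecht5_finetuned_voxpopuli_cs")
model = SpeechT5ForTextToSpeech.from_pretrained("magus4450/speecht5_finetuned_voxpopuli_cs")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Dobrý den, jak se máte?", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder; use a real x-vector for better quality
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```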
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4831 | 7.14 | 1000 | 0.4424 |
| 0.468 | 14.27 | 2000 | 0.4310 |
| 0.4568 | 21.41 | 3000 | 0.4267 |
| 0.4604 | 28.55 | 4000 | 0.4251 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.2
- Datasets 2.14.7
- Tokenizers 0.15.0
|
DrishtiSharma/mixtral-8x7b-v0.1-english-to-hinglish-translation-merged
|
DrishtiSharma
| 2024-02-07T09:39:40Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-07T09:34:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
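Until this section is filled in, a heavily hedged sketch is given below, based only on this row's tags (mixtral, text-generation, 4-bit, bitsandbytes); the prompt format and generation settings are assumptions, not documented behavior:

```python
# Hedged sketch: load the merged checkpoint and generate a Hinglish translation.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "DrishtiSharma/mixtral-8x7b-v0.1-english-to-hinglish-translation-merged"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")  # 4-bit weights per the repo tags

prompt = "Translate to Hinglish: How are you doing today?"  # illustrative prompt format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```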
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hoanghoavienvo/roberta-base-detect-cheapfake-ca1-ca2
|
hoanghoavienvo
| 2024-02-07T09:36:29Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T09:32:30Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-base-detect-cheapfake-ca1-ca2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-detect-cheapfake-ca1-ca2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1482
- Accuracy: 0.94
- F1: 0.9450
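A minimal inference sketch follows (not part of the original card). The expected input format for cheapfake detection (e.g. whether the two captions should be concatenated, and what the labels mean) is not documented, so the example input is only illustrative:

```python
# Hedged sketch: run the fine-tuned RoBERTa classifier with the text-classification pipeline.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="hoanghoavienvo/roberta-base-detect-cheapfake-ca1-ca2",  # repo id from this row
)
print(clf("Caption 1: ... Caption 2: ..."))  # illustrative input; the real format is undocumented
```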
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 38 | 0.6724 | 0.705 | 0.7807 |
| No log | 2.0 | 76 | 0.5437 | 0.925 | 0.9309 |
| No log | 3.0 | 114 | 0.1945 | 0.93 | 0.9340 |
| No log | 4.0 | 152 | 0.1559 | 0.94 | 0.9444 |
| No log | 5.0 | 190 | 0.1482 | 0.94 | 0.9450 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF
|
MaziyarPanahi
| 2024-02-07T09:36:23Z | 19 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"7b",
"lazymergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"Sao10K/NyakuraV2.1-m7",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp",
"base_model:quantized:MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp",
"conversational"
] |
text-generation
| 2024-01-24T14:03:24Z |
---
license: apache-2.0
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- 7b
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.2
- Sao10K/NyakuraV2.1-m7
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
model_name: NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF
base_model: MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp)
## Description
[MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF) contains GGUF format model files for [MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF) and below it, a specific filename to download, such as: NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF](https://huggingface.co/MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
    """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant""",  # Prompt (triple-quoted so the multi-line template is valid Python)
    max_tokens=512,  # Generate up to 512 tokens
    stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
    echo=True        # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./NyakuraV2.1-m7-Mistral-7B-Instruct-v0.2-slerp-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
|
LoneStriker/Senku-70B-Full-GGUF
|
LoneStriker
| 2024-02-07T09:32:34Z | 21 | 13 | null |
[
"gguf",
"license:cc-by-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-07T06:42:07Z |
---
license: cc-by-2.0
---
Finetune of miqu-70b-sf dequant of miqudev's leak of Mistral-70B (allegedly an early mistral medium). My diffs are available under CC-0, this is a merge with the leaked model, you can use the other repository to save bandwidth.
EQ-Bench: 84.89
Will run more benches later.
|
CLMBR/det-noun-lstm-1
|
CLMBR
| 2024-02-07T09:28:50Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-02-01T11:59:17Z |
---
tags:
- generated_from_trainer
model-index:
- name: det-noun-lstm-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# det-noun-lstm-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.8048 | 0.03 | 76320 | 4.7692 |
| 4.5159 | 1.03 | 152640 | 4.4852 |
| 4.3691 | 0.03 | 228960 | 4.3476 |
| 4.2797 | 1.03 | 305280 | 4.2637 |
| 4.2204 | 0.03 | 381600 | 4.2065 |
| 4.1733 | 1.03 | 457920 | 4.1648 |
| 4.1326 | 0.03 | 534240 | 4.1336 |
| 4.0967 | 1.03 | 610560 | 4.1082 |
| 4.0679 | 0.03 | 686880 | 4.0879 |
| 4.0421 | 1.03 | 763200 | 4.0721 |
| 4.0218 | 0.03 | 839520 | 4.0580 |
| 4.0062 | 1.03 | 915840 | 4.0475 |
| 3.9891 | 0.03 | 992160 | 4.0381 |
| 3.9682 | 0.03 | 1068480 | 4.0299 |
| 3.9583 | 1.03 | 1144800 | 4.0224 |
| 3.9536 | 0.03 | 1221120 | 4.0173 |
| 3.9398 | 1.03 | 1297440 | 4.0119 |
| 3.9296 | 0.03 | 1373760 | 4.0071 |
| 3.9182 | 1.03 | 1450080 | 4.0036 |
| 3.9138 | 0.03 | 1526400 | 4.0002 |
| 3.9124 | 1.03 | 1602720 | 3.9966 |
| 3.9072 | 0.03 | 1679040 | 3.9941 |
| 3.9015 | 1.03 | 1755360 | 3.9915 |
| 3.8912 | 0.03 | 1831680 | 3.9895 |
| 3.8851 | 1.03 | 1908000 | 3.9876 |
| 3.8767 | 0.03 | 1984320 | 3.9853 |
| 3.8708 | 0.03 | 2060640 | 3.9833 |
| 3.8676 | 1.03 | 2136960 | 3.9817 |
| 3.8631 | 0.03 | 2213280 | 3.9802 |
| 3.8513 | 1.03 | 2289600 | 3.9791 |
| 3.8494 | 0.03 | 2365920 | 3.9776 |
| 3.8548 | 1.03 | 2442240 | 3.9767 |
| 3.8471 | 0.03 | 2518560 | 3.9757 |
| 3.8443 | 0.03 | 2594880 | 3.9748 |
| 3.8389 | 1.03 | 2671200 | 3.9741 |
| 3.8405 | 0.03 | 2747520 | 3.9735 |
| 3.8435 | 1.03 | 2823840 | 3.9728 |
| 3.844 | 0.03 | 2900160 | 3.9724 |
| 3.8434 | 0.03 | 2976480 | 3.9719 |
| 3.8385 | 0.02 | 3052726 | 3.9717 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
EnDevSols/tinyllama-3T-64k-JSONExtractor
|
EnDevSols
| 2024-02-07T09:27:43Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T09:26:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JackCloudman/Senku-70B-Full-exl2-3.5bpw
|
JackCloudman
| 2024-02-07T09:27:26Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T08:04:36Z |
---
license: cc-by-2.0
---
Finetune of miqu-70b-sf dequant of miqudev's leak of Mistral-70B (allegedly an early mistral medium). My diffs are available under CC-0, this is a merge with the leaked model, you can use the other repository to save bandwidth.
EQ-Bench: 84.89
Will run more benches later.
|
yeye776/OndeviceAI-base-v2
|
yeye776
| 2024-02-07T09:18:42Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:paust/pko-t5-base",
"base_model:finetune:paust/pko-t5-base",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-07T09:18:11Z |
---
license: cc-by-4.0
base_model: paust/pko-t5-base
tags:
- generated_from_trainer
model-index:
- name: OndeviceAI-base-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OndeviceAI-base-v2
This model is a fine-tuned version of [paust/pko-t5-base](https://huggingface.co/paust/pko-t5-base) on the None dataset.
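Since the card does not yet include usage code, a hedged sketch is given below (the task and prompt format are undocumented, so the Korean input is purely illustrative):

```python
# Hedged sketch: run the fine-tuned pko-t5 checkpoint as a seq2seq generator.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("yeye776/OndeviceAI-base-v2")  # repo id from this row
model = AutoModelForSeq2SeqLM.from_pretrained("yeye776/OndeviceAI-base-v2")

inputs = tokenizer("거실 불 켜줘", return_tensors="pt")  # illustrative input only
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```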
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
s3nh/Kunocchini-7b-128k-test-GGUF
|
s3nh
| 2024-02-07T09:17:12Z | 12 | 6 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-02-07T08:48:23Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/Test157t/Kunocchini-7b-128k-test).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
### inference
User: Tell me story about what is an quantization and what do we need to build.
Assistant: Quantization is a process used in signal processing, physics, and mathematics to convert continuous signals or data into discrete values. It's often used in digital systems to represent analog signals or phenomena, allowing for efficient storage, transmission, and processing of information. In the context of audio, video, or images, quantization converts the infinite range of possible signal values into a finite number of levels that can be represented using a certain number of bits.
To understand this better, let's consider an example with audio. Sound waves are continuous signals, and when we record them, we need to
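A transcript like the one above can be reproduced with a short llama-cpp-python call, sketched below (not part of the original card; the quantization file name is an assumption, so check the repo's Files tab for the actual names):

```python
# Hedged sketch: load one of the GGUF files from this repo and run a single completion.
from llama_cpp import Llama

llm = Llama(
    model_path="./Kunocchini-7b-128k-test.Q4_K_M.gguf",  # assumed file name
    n_ctx=8192,  # the model advertises long context; raise this if you have the RAM
)
out = llm("User: Tell me a story about what quantization is.\nAssistant:", max_tokens=200)
print(out["choices"][0]["text"])
```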
# Original model card
|
phamtungthuy/law_model_merged
|
phamtungthuy
| 2024-02-07T09:07:12Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mpt",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T09:05:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
danaleee/CL_rank4
|
danaleee
| 2024-02-07T09:06:48Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-02-07T08:18:39Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks teddybear
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - danaleee/CL_rank4
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks teddybear using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
varun-v-rao/roberta-base-bn-adapter-895K-snli-model2
|
varun-v-rao
| 2024-02-07T08:56:59Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"region:us"
] | null | 2024-02-07T08:09:03Z |
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-bn-adapter-895K-snli-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bn-adapter-895K-snli-model2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7648
- Accuracy: 0.7315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4332 | 1.0 | 8584 | 0.3469 | 0.8699 |
| 0.4008 | 2.0 | 17168 | 0.3200 | 0.8780 |
| 0.3889 | 3.0 | 25752 | 0.3143 | 0.8805 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mtgv/MobileVLM_V2-7B
|
mtgv
| 2024-02-07T08:55:39Z | 106 | 5 |
transformers
|
[
"transformers",
"pytorch",
"mobilevlm",
"text-generation",
"MobileVLM V2",
"arxiv:2402.03766",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T09:16:05Z |
---
license: apache-2.0
tags:
- MobileVLM V2
---
## Model Summary
MobileVLM V2 is a family of vision-language models that significantly improves upon MobileVLM, showing that a careful combination of novel architectural design, an improved training scheme tailored for mobile VLMs, and rich, high-quality dataset curation can substantially benefit VLM performance. Specifically, MobileVLM V2 1.7B achieves better or on-par performance on standard VLM benchmarks than much larger VLMs at the 3B scale. Notably, the MobileVLM_V2-3B model outperforms a large variety of VLMs at the 7B+ scale.
MobileVLM_V2-7B was built on [Vicuna-7B-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) to facilitate off-the-shelf deployment.
## Model Sources
- Repository: https://github.com/Meituan-AutoML/MobileVLM
- Paper: [MobileVLM V2: Faster and Stronger Baseline for Vision Language Model](https://arxiv.org/abs/2402.03766)
## How to Get Started with the Model
Inference examples can be found at [Github](https://github.com/Meituan-AutoML/MobileVLM).
|
finalyear2023/virat-kholi
|
finalyear2023
| 2024-02-07T08:54:34Z | 1 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-07T08:54:29Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of virat kholi,
license: openrail++
---
# SDXL LoRA DreamBooth - finalyear2023/virat-kholi
<Gallery />
## Model description
These are finalyear2023/virat-kholi LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of virat kholi,` in your prompt to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](finalyear2023/virat-kholi/tree/main) them in the Files & versions tab.
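For reference, a minimal inference sketch (not part of the original card); fp16, CUDA, and the prompt continuation after the trigger phrase are assumptions, and the VAE matches the one noted above:
```python
# Minimal inference sketch (not part of the original card). fp16 and CUDA are
# assumptions; the VAE is the fp16-fixed one mentioned in the card.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("finalyear2023/virat-kholi")  # apply the DreamBooth LoRA weights

image = pipe("a photo of virat kholi, batting in a stadium").images[0]
image.save("virat_kholi.png")
```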
|
varun-v-rao/opt-1.3b-lora-3.15M-snli-model3
|
varun-v-rao
| 2024-02-07T08:47:47Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-classification",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:finetune:facebook/opt-1.3b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T02:16:53Z |
---
license: other
base_model: facebook/opt-1.3b
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opt-1.3b-lora-3.15M-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-1.3b-lora-3.15M-snli-model3
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6832
- Accuracy: 0.761
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 49
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3553 | 1.0 | 4292 | 0.2816 | 0.8942 |
| 0.3227 | 2.0 | 8584 | 0.2643 | 0.9043 |
| 0.3151 | 3.0 | 12876 | 0.2574 | 0.9076 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
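For reference, a minimal inference sketch (not included in the original card); it assumes the usual SNLI premise/hypothesis text-pair encoding, and the class-index-to-label mapping is only meaningful if `id2label` is set in this repository's config:
```python
# Minimal sketch (not from the original card): score a premise/hypothesis pair.
# The mapping of class indices to entailment/neutral/contradiction is an
# assumption unless id2label is set in this repository's config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "varun-v-rao/opt-1.3b-lora-3.15M-snli-model3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("A man is playing a guitar.",  # premise
                   "A person is making music.",   # hypothesis
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, "label mapping not set in config"))
```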
|
luping-liu/Detector_Guidance
|
luping-liu
| 2024-02-07T08:34:45Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T08:31:10Z |
---
license: apache-2.0
---
# Detector Guidance for Multi-Object Text-to-Image Generation
by [Luping Liu](https://luping-liu.github.io/)<sup>1</sup>, Zijian Zhang<sup>1</sup>, [Yi Ren](https://rayeren.github.io/)<sup>2</sup>, Rongjie Huang<sup>1</sup>, Zhou Zhao<sup>1</sup>.
<sup>1</sup>Zhejiang University, <sup>2</sup>ByteDance
In this work, we introduce Detector Guidance (DG), which integrates a latent object detection model to separate different objects during the generation process. More precisely, DG first performs latent object detection on cross-attention maps (CAMs) to obtain object information. Based on this information, DG then masks conflicting prompts and enhances related prompts by manipulating subsequent CAMs. Human evaluations demonstrate that DG provides an 8-22% advantage in preventing the amalgamation of conflicting concepts and in ensuring that each object occupies its own distinct region, without any human involvement or additional iterations.
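The card stops at this high-level description. The following is only a schematic illustration of the masking-and-enhancement idea, not the authors' implementation; tensor shapes, the detector output format, and the `boost` factor are assumptions made for the sketch:
```python
# Schematic illustration of the DG idea described above -- NOT the authors'
# implementation. Tensor shapes, the detector output format, and the boost
# factor are assumptions made for the sake of the sketch.
import torch

def guide_cams(cams: torch.Tensor, object_masks: torch.Tensor,
               token_to_object: dict, boost: float = 1.5) -> torch.Tensor:
    """Restrict each object token's cross-attention to its detected region.

    cams:            (num_tokens, H, W) cross-attention maps at one step
    object_masks:    (num_objects, H, W) binary masks from latent object detection
    token_to_object: maps prompt-token index -> detected object index
    """
    guided = cams.clone()
    for token_idx, obj_idx in token_to_object.items():
        mask = object_masks[obj_idx]
        # mask out conflicting regions and enhance the token's own region
        guided[token_idx] = cams[token_idx] * mask * boost
    return guided
```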
|
mach-12/t5-small-finetuned-mlsum-de
|
mach-12
| 2024-02-07T08:34:36Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-07T02:59:32Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-mlsum-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-mlsum-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6917
- Rouge1: 25.924
- Rouge2: 17.2398
- Rougel: 24.0239
- Rougelsum: 24.6845
- Gen Len: 18.9879
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.9987 | 1.0 | 6899 | 1.7184 | 25.6352 | 17.0364 | 23.7635 | 24.4065 | 18.9903 |
| 0.9624 | 2.0 | 13798 | 1.6996 | 25.8132 | 17.1732 | 23.9131 | 24.5744 | 18.9885 |
| 0.9902 | 3.0 | 20697 | 1.6917 | 25.924 | 17.2398 | 24.0239 | 24.6845 | 18.9879 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
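For reference, a minimal usage sketch (not part of the original card); the `summarize:` prefix follows the usual T5 convention and is an assumption, and the German input text is made up for illustration:
```python
# Minimal usage sketch (not from the original card). The "summarize: " prefix
# follows the usual T5 convention and is an assumption; the German input text
# is made up for illustration.
from transformers import pipeline

summarizer = pipeline("summarization", model="mach-12/t5-small-finetuned-mlsum-de")
article = (
    "Die Bundesregierung hat am Mittwoch ein neues Klimapaket vorgestellt, "
    "das unter anderem einen schnelleren Ausbau erneuerbarer Energien vorsieht."
)
print(summarizer("summarize: " + article, max_length=48, min_length=8)[0]["summary_text"])
```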
|
mikeee/phi-2-ft-evol-instruct-chinese-gpt4
|
mikeee
| 2024-02-07T08:33:08Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T08:33:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mikeee/phi-2-ft
|
mikeee
| 2024-02-07T08:33:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-02-07T08:32:57Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-ft
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
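For reference, a minimal inference sketch (not part of the original card) that attaches this PEFT adapter to the microsoft/phi-2 base model; the prompt format is an assumption, since the training data is not documented:
```python
# Minimal inference sketch (not part of the original card): attach this PEFT
# adapter to the microsoft/phi-2 base model. The "Instruct:/Output:" prompt
# format is an assumption, since the training data is not documented.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = PeftModel.from_pretrained(base, "mikeee/phi-2-ft")

inputs = tokenizer("Instruct: Explain what a LoRA adapter is.\nOutput:", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```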
|
smrynrz20/custom_q_and_a
|
smrynrz20
| 2024-02-07T08:26:32Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T08:26:05Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: custom_q_and_a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# custom_q_and_a
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25.0
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
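For reference, a minimal usage sketch (not part of the original card); the question/answer prompt format is an assumption, since the card does not document how the training data was formatted:
```python
# Minimal usage sketch (not from the original card). The question/answer prompt
# format is an assumption; the card does not document the training data format.
from transformers import pipeline

generator = pipeline("text-generation", model="smrynrz20/custom_q_and_a")
prompt = "Question: What is this model fine-tuned for?\nAnswer:"
print(generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)[0]["generated_text"])
```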
|
JiAYu1997/LLM_Practice001
|
JiAYu1997
| 2024-02-07T08:26:01Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-23T01:35:51Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: LLM_Practice001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLM_Practice001
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5112
- Matthews Correlation: 0.5305
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.9805771852415407e-05
- train_batch_size: 64
- eval_batch_size: 16
- seed: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.4761 | 0.4471 |
| No log | 2.0 | 268 | 0.4733 | 0.5052 |
| No log | 3.0 | 402 | 0.5112 | 0.5305 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
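For reference, a minimal usage sketch (not part of the original card); the underlying task and label names are not documented (the Matthews correlation metric suggests a CoLA-style acceptability task, but that is an assumption):
```python
# Minimal usage sketch (not from the original card). Label names come from the
# repository's config and are not documented here; the example sentence assumes
# a CoLA-style acceptability task.
from transformers import pipeline

classifier = pipeline("text-classification", model="JiAYu1997/LLM_Practice001")
print(classifier("The book was read by the student."))
```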
|
muzammil-eds/tinyllama-3T-64k-JSONExtractor-v4
|
muzammil-eds
| 2024-02-07T08:22:45Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T08:21:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
empty-michael/tinystories_1layer_attn_mlp_C10k_k100
|
empty-michael
| 2024-02-07T08:05:58Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"codebook",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"base_model:roneneldan/TinyStories-1Layer-21M",
"base_model:finetune:roneneldan/TinyStories-1Layer-21M",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T04:43:01Z |
---
base_model: roneneldan/TinyStories-1Layer-21M
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: tinystories_1layer_attn_mlp_C10k_k100
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.5429091526514649
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinystories_1layer_attn_mlp_C10k_k100
This model is a fine-tuned version of [roneneldan/TinyStories-1Layer-21M](https://huggingface.co/roneneldan/TinyStories-1Layer-21M) on the roneneldan/TinyStories dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8957
- Accuracy: 0.5429
- Multicode K: 1
- Dead Code Fraction/layer0: 0.0
- Mse/layer0: 611.1572
- Input Norm/layer0: 31.9975
- Output Norm/layer0: 15.0872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Multicode K | Dead Code Fraction/layer0 | Mse/layer0 | Input Norm/layer0 | Output Norm/layer0 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-----------:|:-------------------------:|:----------:|:-----------------:|:------------------:|
| 2.5072 | 0.05 | 500 | 2.4764 | 0.4579 | 1 | 0.0 | 841.1602 | 31.9977 | 4.9114 |
| 2.2285 | 0.1 | 1000 | 2.2265 | 0.4926 | 1 | 0.0 | 792.3023 | 31.9980 | 7.5524 |
| 2.1472 | 0.16 | 1500 | 2.1584 | 0.5025 | 1 | 0.0 | 761.8683 | 31.9980 | 8.9239 |
| 2.1144 | 0.21 | 2000 | 2.1128 | 0.5090 | 1 | 0.0 | 737.1843 | 31.9979 | 9.8992 |
| 2.0847 | 0.26 | 2500 | 2.0791 | 0.5142 | 1 | 0.0 | 716.9390 | 31.9979 | 10.6577 |
| 2.0439 | 0.31 | 3000 | 2.0482 | 0.5185 | 1 | 0.0 | 698.7266 | 31.9979 | 11.3599 |
| 2.0263 | 0.37 | 3500 | 2.0253 | 0.5224 | 1 | 0.0 | 682.2680 | 31.9979 | 12.0105 |
| 1.9906 | 0.42 | 4000 | 2.0066 | 0.5253 | 1 | 0.0 | 669.1965 | 31.9979 | 12.5568 |
| 1.9852 | 0.47 | 4500 | 1.9898 | 0.5279 | 1 | 0.0 | 657.5872 | 31.9979 | 13.0526 |
| 1.9687 | 0.52 | 5000 | 1.9757 | 0.5300 | 1 | 0.0 | 648.2462 | 31.9979 | 13.4496 |
| 1.9672 | 0.57 | 5500 | 1.9620 | 0.5321 | 1 | 0.0 | 640.0822 | 31.9978 | 13.8078 |
| 1.9441 | 0.63 | 6000 | 1.9513 | 0.5339 | 1 | 0.0 | 633.8831 | 31.9978 | 14.1018 |
| 1.9408 | 0.68 | 6500 | 1.9397 | 0.5358 | 1 | 0.0 | 628.0929 | 31.9977 | 14.3550 |
| 1.9256 | 0.73 | 7000 | 1.9302 | 0.5374 | 1 | 0.0 | 623.2726 | 31.9977 | 14.5534 |
| 1.9204 | 0.78 | 7500 | 1.9225 | 0.5381 | 1 | 0.0 | 619.4573 | 31.9977 | 14.7258 |
| 1.907 | 0.84 | 8000 | 1.9150 | 0.5393 | 1 | 0.0 | 616.4379 | 31.9976 | 14.8625 |
| 1.8931 | 0.89 | 8500 | 1.9076 | 0.5408 | 1 | 0.0 | 613.7874 | 31.9976 | 14.9685 |
| 1.9021 | 0.94 | 9000 | 1.9021 | 0.5417 | 1 | 0.0 | 612.0126 | 31.9975 | 15.0379 |
| 1.8967 | 0.99 | 9500 | 1.8970 | 0.5426 | 1 | 0.0 | 610.6121 | 31.9975 | 15.0932 |
| 1.8942 | 1.04 | 10000 | 1.8957 | 0.5429 | 1 | 0.0 | 611.1572 | 31.9975 | 15.0872 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|