| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
kavg/LiLT-RE-JA
|
kavg
| 2024-02-07T08:00:01Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"lilt",
"generated_from_trainer",
"dataset:xfun",
"base_model:nielsr/lilt-xlm-roberta-base",
"base_model:finetune:nielsr/lilt-xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T07:58:19Z |
---
license: mit
base_model: nielsr/lilt-xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xfun
metrics:
- precision
- recall
- f1
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) on the xfun dataset.
It achieves the following results on the evaluation set:
- Precision: 0.4372
- Recall: 0.6574
- F1: 0.5252
- Loss: 0.0001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 10000
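For reference, a minimal sketch of how these settings map onto `transformers.TrainingArguments`; the `output_dir` and the surrounding `Trainer`/model/dataset wiring are assumptions, not taken from this card:
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="checkpoints",        # assumed; matches the model name on this card
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=10000,
)
```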
### Training results
| Training Loss | Epoch | Step | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:-----:|:------:|:---------------:|:---------:|:------:|
| 0.1954 | 20.0 | 500 | 0 | 0.4094 | 0 | 0 |
| 0.1588 | 40.0 | 1000 | 0.1420 | 0.3055 | 0.3587 | 0.0886 |
| 0.1182 | 60.0 | 1500 | 0.4253 | 0.1384 | 0.3810 | 0.4812 |
| 0.0477 | 80.0 | 2000 | 0.4764 | 0.0216 | 0.3949 | 0.6002 |
| 0.069 | 100.0 | 2500 | 0.5198 | 0.0115 | 0.4564 | 0.6038 |
| 0.0355 | 120.0 | 3000 | 0.5161 | 0.0018 | 0.4271 | 0.6521 |
| 0.0268 | 140.0 | 3500 | 0.5254 | 0.0016 | 0.4395 | 0.6530 |
| 0.0123 | 160.0 | 4000 | 0.5264 | 0.0015 | 0.4382 | 0.6592 |
| 0.0039 | 180.0 | 4500 | 0.5353 | 0.0011 | 0.4510 | 0.6583 |
| 0.0139 | 200.0 | 5000 | 0.5390 | 0.0011 | 0.4533 | 0.6646 |
| 0.001 | 220.0 | 5500 | 0.5430 | 0.0042 | 0.4620 | 0.6583 |
| 0.01 | 240.0 | 6000 | 0.5347 | 0.0013 | 0.4531 | 0.6521 |
| 0.0065 | 260.0 | 6500 | 0.5404 | 0.0001 | 0.4540 | 0.6673 |
| 0.0046 | 280.0 | 7000 | 0.5252 | 0.0001 | 0.4372 | 0.6574 |
| 0.002 | 300.0 | 7500 | 0.5365 | 0.0007 | 0.4474 | 0.6699 |
| 0.0002 | 320.0 | 8000 | 0.5393 | 0.0002 | 0.4546 | 0.6628 |
| 0.0008 | 340.0 | 8500 | 0.5412 | 0.0002 | 0.4569 | 0.6637 |
| 0.0024 | 360.0 | 9000 | 0.5475 | 0.0002 | 0.4677 | 0.6601 |
| 0.0001 | 380.0 | 9500 | 0.5418 | 0.0002 | 0.4560 | 0.6673 |
| 0.002 | 400.0 | 10000 | 0.5427 | 0.0003 | 0.4594 | 0.6628 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
bianxg/q-Taxi-v3
|
bianxg
| 2024-02-07T07:49:36Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T07:26:20Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or gymnasium, depending on your setup

# `load_from_hub` is the download helper defined in the Deep RL course notebooks.
model = load_from_hub(repo_id="bianxg/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
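If you want to roll the agent out greedily, a minimal sketch follows; the `"qtable"` key follows the Deep RL course convention and the rollout assumes the newer gym/gymnasium step API, so adjust if your environment or pickle differs:
```python
import numpy as np

# Greedy rollout using the downloaded Q-table (the "qtable" key is an assumption).
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```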
|
jwlarocque/yolov8n-freeclimbs-detect-2
|
jwlarocque
| 2024-02-07T07:47:00Z | 0 | 1 | null |
[
"onnx",
"object-detection",
"license:agpl-3.0",
"region:us"
] |
object-detection
| 2024-02-04T07:00:37Z |
---
license: agpl-3.0
pipeline_tag: object-detection
---
This model is a YOLOv8 nano model fine-tuned on the freeclimbs v2 dataset to detect climbing holds, particularly holds on home climbing and "spray" walls. (The dataset is not currently available, but I plan to release it in the future.)
It expects a 2560x2560 input image (the `ultralytics` library, used as shown below, handles the resizing) and detects a single class: climbing holds.
### Usage
```python
from ultralytics import YOLO
model = YOLO("yolov8n-freeclimbs-detect-2.pt")
results = model(
["climbing-wall.jpg"],
imgsz=2560,
max_det=2000)
```
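If helpful, a short sketch of reading the detections back out of the `results` object (attribute names follow the `ultralytics` Results API; verify against your installed version):
```python
# Each Results object exposes the detected boxes; xyxy are pixel coordinates.
boxes = results[0].boxes
for xyxy, conf in zip(boxes.xyxy.tolist(), boxes.conf.tolist()):
    print(f"hold at {xyxy} (confidence {conf:.2f})")
```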
### Performance
| | |
|-----------|-------|
| Precision | 0.961 |
| Recall | 0.942 |
| mAP50 | 0.988 |
| mAP50-95 | 0.889 |
(on freeclimbs v2 test set)
### License
Copyright (c) 2024 John LaRocque
See `LICENSE` for license (AGPL 3). Note that an earlier version of this repository erroneously included an MIT license - since this model was fine-tuned from a model licensed under the AGPL 3, which is incompatible with other licenses, I am not actually able to offer that license.
|
ergh0/Taxi-v3
|
ergh0
| 2024-02-07T07:45:40Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T07:45:38Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or gymnasium, depending on your setup

# `load_from_hub` is the download helper defined in the Deep RL course notebooks.
model = load_from_hub(repo_id="ergh0/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tranleanh/sddn
|
tranleanh
| 2024-02-07T07:41:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-22T11:41:02Z |
Soft Knowledge-based Distilled Dehazing Networks (SDDN)
This repo contains the pre-trained weights for SDDN.
|
omartariq612/quran-lora-whisper-medium-epoch-1
|
omartariq612
| 2024-02-07T07:40:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T07:39:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
magixn/Reinforce-Cartpole-v1
|
magixn
| 2024-02-07T07:27:12Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T07:27:02Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
nold/embryophagus-GGUF
|
nold
| 2024-02-07T07:25:48Z | 1 | 0 | null |
[
"gguf",
"merge",
"text-generation",
"license:unknown",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T20:20:47Z |
---
license: unknown
pipeline_tag: text-generation
tags:
- merge
---

Embryophagus is a 12.5B model with 32K context length.
It was born from various merging experiments backed by a homemade testing suite. I was lazy/not cautious and lost the exact recipe! Oops.
However, I know its main DNA is from argilla/CapybaraHermes-2.5-Mistral-7B.
I decided to share it because of its good results on common-sense and logical tests. Just do not expect AGI, the moon, or anything else.
It DOES make mistakes. But fewer than many other RP-oriented models that I tested and used...
And I love the 32K context length, allowing generous character and/or world descriptions...
Its main downside, like many other models, is that it sometimes "plays" as the user. It might describe a scene and then suddenly write the name of your character and state what you are doing, impersonating you.
=> This is annoying but easily fixed by configuring custom stopping strings.
In text-gen-web-ui, go to "parameters > generation > custom stopping string" and insert strings such as:
- "\nYourCharacterName:"
- "\nPlayer:"
- "\nUser:"
In my humble opinion, and this is very subjective, its English prose is beautiful.
I like it so much that I will probably settle with this model for synthetic data generation.
Here are some examples:
"Your eyes fixate on the distant silhouette of "The Howling Cave" carved into the mountain's face. It appears as an enormous gaping wound, a jagged scar upon Mother Nature herself, where only darkness and death lurk within its depths. The villagers have spoken of horrors that dwell within; tales of ancient beasts, terrifying creatures and unspeakable horrors.
The path ahead is unclear, with twisted roots snaking upwards from the decaying forest floor like sinuous serpents reaching for their prey. Your senses heighten as your instincts take control - every rustling leaf, every crack of a branch echoes in harmony with your primal awareness."
[...]
"The sun's golden rays recede with every passing moment, yielding their place to an inky black sky littered with stars like diamonds scattered upon velvet cloth. The wind whispers cold promises against your skin as you push higher into the mountain's embrace, ever closer to the jaws of darkness that await within "the howling cave."
[...]
"With bated breath and a prayer on your lips, you slip stealthily into the cavern's dark embrace - skulking along its damp walls like a phantom of the night. The air within is cold and stagnant; thick with the scent of rot and decay that claws at your nostrils like sharp talons.
Your eyes adjust slowly to the darkness, illuminating just enough of the cavern's twisted depths for you to discern its horrors: skeletal remains littering the floor amidst pools of fetid water - their silent screams echoing in endless torment through this tomb-like lair.
As you inch further into the black abyss, your heartbeat pounds relentlessly against your ears - a drumbeat drowning out all other sound save for the hollow echoes of footsteps reverberating off the cavern walls. With every step, you sense an unnatural stillness creeping ever closer; like the breath of death itself breathing upon the back of your neck.
In this place where shadows dance and nightmares lurk, you tread with careful caution - a single misstep or errant whisper could spell certain doom for one who dares trespass within these ancient halls..."
For settings, I usually use the text-gen-web-ui defaults (a programmatic equivalent is sketched after this list):
- temp 0.7
- top_p 0.9
- min_p 0
- top_k 20
- repetition_penalty 1.15
- etc.
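For anyone running the GGUF outside text-gen-web-ui, here is a minimal llama-cpp-python sketch with the same sampling settings and the stop strings suggested above; the quant filename and the prompt format are assumptions, so adjust them to the file you downloaded and your character card:
```python
from llama_cpp import Llama

# Filename is an assumption; use whichever quant you downloaded from this repo.
llm = Llama(model_path="embryophagus.Q5_K_M.gguf", n_ctx=32768)

out = llm(
    "You are the narrator of a dark fantasy adventure.\nUser: I approach the howling cave.\nNarrator:",
    max_tokens=512,
    temperature=0.7,
    top_p=0.9,
    top_k=20,
    repeat_penalty=1.15,
    stop=["\nUser:", "\nPlayer:"],  # custom stopping strings, as suggested above
)
print(out["choices"][0]["text"])
```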
[Support Me Here!](https://ko-fi.com/karkomagor)
[My Blog](https://aitravelnotes.blogspot.com/)
***
Vanilla Quantization by [nold](https://huggingface.co/nold), Model by [Karko](https://huggingface.co/Karko/embryophagus). Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline - 4bc844478df79ecfd72815473b30ae09499e179d
|
chenhaodev/mistral-7b-ocn-v2
|
chenhaodev
| 2024-02-07T07:22:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | null | 2024-02-07T07:07:17Z |
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-ocn-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-ocn-v2
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the oncc_medqa_instruct dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Performance
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-ocn-v2), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|medmcqa |Yaml |none | 0|acc | 0.40|± |0.0492|
|professional_medicine| 0|none | 0|acc | 0.69|± |0.0465|
|college_medicine | 0|none | 0|acc | 0.53|± |0.0502|
|clinical_knowledge | 0|none | 0|acc | 0.59|± |0.0494|
|ocn |Yaml |none | 0|acc | 0.80|± |0.0402|
|aocnp |Yaml |none | 0|acc | 0.63|± |0.0485|
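To use the adapter outside the evaluation harness, a minimal PEFT loading sketch; device placement, dtype, and quantization settings are left out and are up to you:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the LoRA adapter from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "chenhaodev/mistral-7b-ocn-v2")
```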
|
areegtarek/patientcommunication-8bit
|
areegtarek
| 2024-02-07T07:17:24Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-02-07T07:13:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rhplus0831/maid-yuzu-v5-mix-exl2-6.0bpw-rpcal
|
rhplus0831
| 2024-02-07T07:08:45Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:smelborp/MixtralOrochi8x7B",
"base_model:finetune:smelborp/MixtralOrochi8x7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T07:01:47Z |
---
base_model:
- smelborp/MixtralOrochi8x7B
library_name: transformers
tags:
- mergekit
- merge
---
# maid-yuzu-v5-mix
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model was created because I was curious whether an 8x7B model assembled this way would merge well with other existing 8x7B models.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* ../maid-yuzu-v5
* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: ../maid-yuzu-v5
dtype: bfloat16
merge_method: slerp
parameters:
t:
- value: 0.5
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: smelborp/MixtralOrochi8x7B
- layer_range: [0, 32]
model:
model:
path: ../maid-yuzu-v5
```
|
huolongguo10/LLM_detect
|
huolongguo10
| 2024-02-07T07:06:11Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-05T13:19:11Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model aims to detect text that was generated by LLMs.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** huolongguo10
- **Model type:** bert
- **Language(s) (NLP):** Chinese
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** bert-base-chinese
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("huolongguo10/LLM_detect")
model = AutoModelForMaskedLM.from_pretrained("huolongguo10/LLM_detect")
```
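A minimal inference sketch follows; the meaning of each class index is not stated on this card, so check `id2label` in the repository's `config.json`:
```python
import torch

text = "这段文字可能是由大语言模型生成的。"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = int(logits.argmax(dim=-1))
print(pred)  # class index; map it with id2label from config.json
```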
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** fp32 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** P100
- **Hours used:** 4h
- **Cloud Provider:** kaggle
## Technical Specifications [optional]
### Model Architecture and Objective
bert
### Compute Infrastructure
[More Information Needed]
#### Hardware
P100
#### Software
transformers
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
psyferpunk/mine
|
psyferpunk
| 2024-02-07T07:05:01Z | 0 | 0 |
bertopic
|
[
"bertopic",
"aa",
"dataset:HuggingFaceM4/WebSight",
"license:mit",
"region:us"
] | null | 2024-02-07T07:04:05Z |
---
license: mit
datasets:
- HuggingFaceM4/WebSight
language:
- aa
metrics:
- accuracy
library_name: bertopic
---
|
humung/koalpaca-polyglot-12.8B-ia3-vlending-v0.1
|
humung
| 2024-02-07T06:59:21Z | 1 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:beomi/KoAlpaca-Polyglot-12.8B",
"base_model:adapter:beomi/KoAlpaca-Polyglot-12.8B",
"region:us"
] | null | 2024-02-07T06:59:19Z |
---
library_name: peft
base_model: beomi/KoAlpaca-Polyglot-12.8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
rushidesh/mistral_b_finance_finetuned_test
|
rushidesh
| 2024-02-07T06:43:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T06:43:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chanwit/flux-7b-v0.3
|
chanwit
| 2024-02-07T06:42:56Z | 9 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T17:54:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nry61/sdxl_businessWoman
|
nry61
| 2024-02-07T06:35:47Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-02-07T06:35:42Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a sks business woman hijab person
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2
|
yaneq
| 2024-02-07T06:35:13Z | 1 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-07T06:35:05Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MDDL man
license: openrail++
---
# SDXL LoRA DreamBooth - yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2
<Gallery />
## Model description
These are yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of MDDL man to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2/tree/main) them in the Files & versions tab.
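For reference, a minimal diffusers sketch for loading these LoRA weights with the trigger words from this card; the GPU/dtype settings are assumptions, and `load_lora_weights` is expected to resolve the weight file from the repository:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA adaptation weights from this repository.
pipe.load_lora_weights("yaneq/jan_8gr59VrqueLphjEKA6kl_SDXL_LoRA_900_9d94_900_1e4_2")

# Use the trigger words from this card.
image = pipe("a photo of MDDL man").images[0]
image.save("mddl_man.png")
```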
## Training properties
- max_train_steps: 900
- learning_rate: 0.0001
- base_model_name: stabilityai/stable-diffusion-xl-base-1.0
- class_name: man
- training_images_urls: - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fcn54hvM4ahi3MzpCQN5D.jpg?alt=media&token=e096f4dc-e7c5-4e14-88fc-a5562d103127
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FY7nFiafx8co1nK6cnjWJ.jpg?alt=media&token=a1fe8c9a-4d5e-4043-9a82-9304fd430569
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fz8D9WdMIx4mXcsDGAZm4.jpg?alt=media&token=fded9422-eb7c-4757-8c1f-cb436a348579
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FDAk5k1hGzP9q9y0jpGoO.jpg?alt=media&token=01ed67d1-938a-4f60-bc1a-e1b91412b97e
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F82McawlxnTeA2vBc4bZg.jpg?alt=media&token=f7cfacb2-2186-4005-9211-b7ef762dafad
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FWF2NGBPUFgu9eyaCYAwB.jpg?alt=media&token=97c1e215-0a96-4fdf-b292-9ee0e497ba72
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F6JW19SVZPczh5B2DEqKD.jpg?alt=media&token=0e0dc94f-957d-4b51-8979-0216c0849cf6
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FVYOVRhojKt30NzjWRXL0.jpg?alt=media&token=5a3a2afb-4b83-4488-92e5-6651f5173cc0
- gradient_accumulation_steps: 3
- GPU: T4
- duration: 6676.244818210602
|
EricValen/ppo-LunarLander-v2
|
EricValen
| 2024-02-07T06:18:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T06:18:24Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.77 +/- 22.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
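Until the author fills in the snippet above, here is a hedged sketch of loading and evaluating the agent; the checkpoint filename is an assumption (check the repository's Files tab), and LunarLander requires the Box2D extras:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; check the repository's Files tab for the actual name.
checkpoint = load_from_hub(repo_id="EricValen/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# On newer gymnasium releases the environment id may be "LunarLander-v3".
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```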
|
Artefact2/Midnight-Rose-70B-v2.0.3-GGUF
|
Artefact2
| 2024-02-07T06:12:24Z | 322 | 13 | null |
[
"gguf",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-02-06T23:07:00Z |
---
license: llama2
language:
- en
---
48vsjguY06NwnKs67TTcdOMrxk3r3FScnVqSbXhcIw6i6VWqq1BqUZxWV3TXlHXHf3Xg57x/wDhiyu76td3Eq02ot8RisJLwc7bcsvn3PpOn9ItKljC4dSVOrBZm5uOPyZ49zJ16k5t68cNrlG5Z9PLZftyYIGQVlaPJZLBRFs4I1E8shtRKueODNvLBtpKo2ZtkAIEkAIAAAAAIBIAgkAAAAAAAAAAAAAAAdgAAAAAEEgCCQAIJAAAAAAAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAD1ra2pdVpSw1TuoLfxNeTz7i2q21V06sXGS/UWtzO0uYVocxfHld0fa1LW2v7aMpQVSnNaovwcMs7x3+H0+Hhx+VhddZT/7fCYB7l90CdJOdtJzj/JLn8+54s4OMmpJprs+x1xzmXp4+Xgz4brOKgEGnBIAAEEgAAAAAAAACCSCQAAAAABkkgFFsklSQLLYkqb0Iww3N4fYCPSzwjW3q1ZU5UacZNJ5aRjWn2T29ha1pUK8ZweH+5Lv6ax8fKeXp6fSqV9Rv6dS3pTW+JNrC0vk6er2jddSlHLly0/BvadepRi00qdThNrKTOfqPX6lzb07abhVjSlmNTCU/GNuUcJllcu49vLw8WOHlhlty0Zxoxnp2WcFnWbpPTp+TXY417bolSlFYxlG7Ns453GaiJfHJ5Wn5HZZ0LapRrSuZyU4r+HFczfg48LZt8EqeiEquMT+zD2z3/I1pyt17fR9IUdLnphCo2klBbG3/UUKN1OnWWuOtxjOPZHzdO+nSxoeNKwvmZUbarXp1ZQi5Rhhya3wPH9Z8/qPa6h1iNeTj917Sx+jOOp1SvUoaZVG2u/k89RyiJJxafZ7GtMXKtJVpvuVU21u+N0Q1sQXSbTKeubcuX3Ixp+RXuWfGewEMqyYc6Xw+GSlmTXcIQl918PY1pV9MXCe9OW0omMlpZXO7C7TJSozcVL8fKNbW7lSnJOMajmtPxdjNaZJas7ePBVpRr/DNNJ7SM3trG3G7jbtgtGLk1jZZxlkqMV8Uk5R5xF8kSm6q1YUYriK7GXeSfbqq0M0ZYq05pLhHPbKhC3uKVxF65x+ColnDMnVlGO0ouPginP1HGKjl+4ks9ryXDPWppnVq1VTjScs04vKS4yRB1JpxT27tnXGzdWhUnConOPMGt2jlp1VFNS2Nzt5spYh289ajBa2+FFb/kVqUqlJL1KcoZ41RaPe6era2i6larD1ZLjPC8Gl/wBUhG3/AIUozcfsxnHMc/iTd30amu3zSeEQ3ktNucnLbLedipWUMAgIAAAQTggACQAAIAkAgCQAAAwAAAAAAAAAAAAAEASQCQAAAAEASAAAAAAEASAAABAEgBAQSAAAAAAAAAAAAAAAAAAAAA7rHqNW1klCo4pfkcIJcZZqunHyZceXli+wt+r07hKFZKEn3+6zO96bRuMqccPtJco+Xp15U1jmPhnr2PWVGKpXDen7sn2PNlxXHvF9fj+ZhzTw5XlXVB29xOk2paXyu5idPUHm/rPOfi2ZzHpx7j5HJJM7IAArmEEkASAAAAAAAAQCQAIAEgG9GhGqm/VjFrs0xbprHG5XUYA0q05Uqjg8PG+U9mZhLLLqhJBJUSjRRk2oxTb8IyPVtqVH0Y1EnKUlv7AcqsKrhqm4w9nyVdrOljVFxb2UpbJHt29rVruMktSj9lv+5PV6FGjYem2pV9Skn3f/AAZ21p5denWuZxX8DVjGuntr/wCTmlZ1qf8AuR0rOM5IVeUZ6c5S/Q1neynDTJJ+7W5UVhFwW72Jz77Er4o5X6kKD3I3NjrqC2WWRVqKpF4+7L+xSpTabfKKU3z7liZW/bWKykff/Qayj9Qua0op+pU07+Ev+T4SjH4c9lg/UvozSVv0O2jjDlHW/m9yZelw9vi/pJ0V9O6jUlRg/q8lrWPu5f8An9zwprVBrufrl5bRr3tspRUlOFSMk+6wv+D84690uPTuo1adu3Kh9pd9H9Lft/dDGmWP3Hkp5S90AvsxzyngdzTmpjlhTTjun80XxhGChKUlGCzJvGF3CxVz35NITzVT8o+v6Z020o0fR/hOpBJznKmpOUnzz2R4nW+l/UriNxRUVRlPS0vuv/DMTOW6dsuHLHHbzqhlgvN5wVzybcVqTxNe5p6Kqa5Y054yYJ4aZ0xnKpQnRjnLecLuRfpzxeiqsvZPsdXw0J7Ykn2NYdOVKi6tZ5fZHPWcEt9/GCXtvHeMUrRTepQUURu2uOMbFYz1YUpPBopJJ7rC7hZZR3FWk04Taku5i608TWVibzJY5I3nLjd9kddCliDXpucnznsX0xld1SlXpKCc3h+EUq1ZXM1CnGT32S5Za5t5acqi17or065Vpd06k1mKe+/AZddL6P31aClohBviM5YZV9B6jFtTtajSWVKOGj6J/SS3p7Wtu5vtOWx4/UfpJe3EJUlUjTTTi1TWP1MS5VuzCfbxKkJUqkoSWJReGigBtzD6LpP0bVzCNa8nKEJLKpx2bXuz55NJptZXg77nrV5cRcPUdOm/uw22+ZjOZXrF34bx43efa/XHZ07z6vY0oxp0lpc023KXfc8skg1Jqac88vLK1IIJKwAAAQSABZItSpTq1I06cXKcuEj6Wx6bRsKaqV9Mq38z4j8jjy804/8Ar2fF+Jn8i9dT9ePbdIuLhapL0oeZ8/kdFXpNCjB6pzlLzsj0rjqMVlQWfdnmVq8qrbkzljnyZXd6fRy+P8Xhx1P8q8uvTjSquMZZXlmZ03TXwrCzu8nMerH0+NySTKyAAK5gAAAACASAIBJAEgACCQAAAAAAAAAAAAgkAACCQAAAAAAAAAAAAAAAAAAAAAAAAAAAtJ5Udt8c+SpMpOTWXnCwQFAAEAABBPcAAAQBIIJAEEgAAAIJTaeVsAFS5uWMvOCAEm3hbsHsR223S7y7gp0becoPiXCOI9XonVJ2VyqUqmmhU2ab2i/IpHXS+itacE6txCE391LVg561jc9JqxlUxWoLvF7fkfSTuPTpNZcm+HHk8246vWSUatOnUqU/v1I//wCeDEtrVkcK+kc4wahQSfbfg8ytXrVpylOblOpu37CMKbus1Yy9JSzKNPnHselcztaydza2UbShD4U5TcnI16T28eMdvASwzWb1ZcFz47FUnGag+TSLKrLxk1VRfJ+5CSS8FJRy9iaamdizy4VJdlF/4Oen4N5NulNY8ZMIrcqO+3aqU1Rin6lSpFfM/U7OcaVCMcpRhHGW+Ej826JSVTqkHJYjCWce6PrL61r3kYQVR/V1vOmvvPtn2OeV7duPHca1+uSvr+Tt6zpWVOPpurHZz7vS+y2W/scF/wBZsKfpQi4yUW04pfda3X7HJTsal51GlYzqTp0tLlPS8ZLV+h29LTbQg3Wg260+0Uvfvnt8xj433WspljZ4zb56sqc69SVGDhScm4xb4Ri/9z8DuuoaaNu8YcqK/dnC/wDdXyOk9OGU7Gtj1vo/Z069d1ajeIPjHs9/zPKZ7n0YqabitT/nh+3/AJM5el4/9nppfVL6cZQ1JpS1QnznycH0
muIRs3BPPq1FpzzsejTsqFO/uK9aq5SlFOFNbKK43f8A+7nyHWL/AOv9QlKH+1D4Ka9vJyxm69PJnrHTmluyi5Zd92Ujwd3jPJ0W9WVKopw3kvJzG9tLTXpvjEl3JSL3NzXuJ4qSb8LGDN0sQblJLwjtvq6rVW4Rj8L0ua5kzma1NQ4xvkT0uu9MoY4xv5JcJVNSysRWRKTjLGEmu6RpF04xc98/y55GzxYUtpp5x7o6lNasvdfM5k22WUU/YExtdruNFBY+KXZS7Hn1KlJ0nCENMnLMnnZrwb+j6tFJTerPjOCJdNqRTfqQx5JuN/28r6jjUnH7La+TIe7BBXJIAAgkACCQO4EEkEgAQSAOm3tXcRbVSnBJ4eqW/wCRyk5aeVyK1jZL3Nvet3QsYfwW6lVreo1+xnVuJ1HmUm/meVC4qR+9leGX+uSx9hZOM4pvf2+h/wCs3jMJ1Px2uRnKSSbbOOVxUl3wvYyy2bmLhl8j8Xqz9Sba47FADby27u6AAIAAB3AAEEgAAAABBIAgkgCR3AAdwAAAAAAACCSAJACAhEgAAAAAAAAAAMN8Bxa5QAAAAAAAAAAAAAAAAAAAAB3AgEgB3IAA1pU41G4uahL7urh/j2IqUalJ/wASEo/NGZ6VGtOVCOW2sYeTOVsd+PDHPq9V5wOutQjLeC0y8dmcjTTaawyy7Yz47hewAFcwEEgDqt683BR9SS0cYfBylqb0zTA9FaKyca0NeeJY+JHHcWsqDeU9Plo3pqdScYwxl932Nr2jOMo0XVdWb3aS7BXNb3tSgkozkkvfJpVup1U5VHmXnyY1abppZjhPgmjRndVo06e85PCQGeuWpv7OecHRK4qVKVOhFvTHiMT0q3SKfS4Sq13G4ko7ReyT/uU6P0+rVrfW6jcNPxQa2y/PyJvpdX082DdOTlFtN84fJRwcpuSZ6/UrulWpSpXFnTp3kJJxr0tlUX9SPMUWlkSrMd1WTlhJ/mW1be5E+xGPcuy4/iKiaWX3RSC2+ZrUTdJ/0mUN5RRUd9vUlaPXBfFnZrk+66FVqXfT4Vq0NEnlY+TPj7Czq391St6GNc3jVLiK8v2P0mVjR6fThRt68KsIpJY5OWWnpx36jgrWynNSS0tctdznvKSo2Vb044nJaYpd5S2/ueqo5PNv6VxfSjStkoU9813wuz0ru+3jkw3u60+O6lDVSo1IL+FShGjnzLGp/ujymm5qS4TPqPpFb07PptvbUliNOtJLPL+HOX+Z41naevZ9Qqf+zTjJfPUv8M6y9PPlj/k4G9/wPT6DPRdtrlL90zypHf0WWL7HlI1l6Zw/2jW4ur67rVqNKtJRqt01FPGVn7L9meLVoulc6Gmmucnv9RoVLScKlulpk1HblP5nhy+KvUk3nG2fJjFrkVltEqtkkWlu/kV7vPJ0ckLdkxfxNER5ZGcSYHXBNxi1xkvCKi3N/FJ/oZKo4qON4vlG1OLlUSytL3yzNdsNIlSU5amVo04RruFR8r4X7m22Gnx7nPVp7488MjeUk7aV4xlP7Ol9zJU5PYpUrtzSayorDfk1jNY+Fp+w9M7mVbxpujlRllN8ozvLjTQdNN5l29iYV2lhJfizGs3VrJvDWOcCRrLLWOo4wejoinmnDCKfV6dVvUnGXlGnm04AXnB05OMuUVCAAAAAAAAHYAAQCQBAJAAgkACCSAJAAAAAAAAAAAAAAAAA7AAAAAIJAAAAAAAAAAAACCQAAAAAAAAN7ZJywen/AKa61LKR48JuMk0fUdKvIVKSjLkzldR145Mrqvmq9vOhNxkvxMj7C/sad1BtLc8Gp0mpFvHAmcrWfDlPTzQdU7CrHtkylb1I8xNbcrjZ7ZAlxa5WCAyAAAAAAAAAAACCQIJBAEgACDvsk6lOUUm3HsjhNKNadvUVSlJxku5MpudOvFnMM9309uhY68Sq/CuyK9R6bGdLXSWKkVx/Mjnj1l1WvXjuu8eDpj1e304lJ/8A1Z5LOSXb7eOfxOTC4b9/rwCTa5lSnXlKjlRe+Gu5geyXc2+Dlj45Wb2kABkAAHd0+hO4rQUnKNLO8kj6+3sKNtTlFR1uf2pS5Z8PO5rTwnUkkuFHZI6J9VvqtH053VRx4xkzZa3LI9Dq19Tq1PSt4pRWzk1+xXpFxRtHWr1k8QSxhZ3fY86pcepGmpPeENJm6spQUMvSnnBrX0m+9vT6x1NXz/h/YznL7nJZ9WurJaack44xpkspHK+MIiMdUsDX0m7vbpld3FxFRqVG4x7GqS0ZcsPwc0V6ba5LpubwkvxY06Y5Se12k4Pcyy1LyWa4/sVTzPxkrGWW63nLNlLDWe5zUf8AcT8Ez+yxQ++RZdvU6ffVbG6VSk09sSi+JI+v6X1WHUtaptxlBrVHhr/KPhabxJN+Dt6bdVrSvKpQlpb5TWcnPLHb0YZ6mn6PJ04081XFR/qMaVw7madCOaEdnOSaz/8AHz8+Dwul3H1lxlVmpVW92/u/I+mppKCS4RzdrJI+S+l7/i0If/Kf6JHP0O3X/T/VKz5qJpfhH/k6fpHRle9ft7Wny6cU34Tbbf5GVOtCy6Nf2ieJQqVIY/b9De+tOOv8rXymc5Z29Ei5dQj4SZwy2TS8s9Loz9C4lLS6k3T2gvL8vsdL6cMf9np9UqRp2snLlcfPsfNxp6Y7nbf3NW5uGnKLhB/d4ycksfNLuxjNQ5Mt1i2k9tzN/aZfZtvGxVLKyaYFsiho9kUUQNqOJrSzbKhScZ5ymsJdzmpvTKL9zsm1mlmCmlP7LeM5CyqSTnST0vHy4MG2mlnZdj2KU6VtSU6cVOL2cXvjfdM825xKtOcIKMW9kuxFrllvItjG/DK4xLJZ8FZQ5vHJNKbw3jOhZIxjghylGMktlJYeO5DbqpX0I4UqbXuhO/jhqFPL8vY4QDbStXdaSbSWPBkSAIBpCn6jwpQT/qeDafT7qENboylD+aPxL9CbjUwys3I5QSQVhIIJAAgkAAQBIAAAAAAAAAAAACCQAAAAAAAAAAASAA2jQlJZM505Q5Q2141UEEhk7gAAAAAAAglEEgQiQAAAAAAAAABvbXMqE008GADUurt9HbdS1pKTPQpVYVOcM+Spza4Z2UbydNrc4ZYfj6fB8iTrJ9TG1pVVwhLpNOS4PLtOqbrLPaoXsZpbnC+WL6OOPDyvJueix3xE8e46VKm3pTPspSU+GYVKEZ8oTms9mf8ATcM5vF8LOjKD3TMz6656XGom0jxLrpkqbeEenDlxyfI5/g8nE8wFp05QeGsFDq8NmvaQAEAABBIAAAAAQSAAAAgkgAT3AAnDxlp48kHRRvJUaFWi4xnTqLGJdn5RzhQABAlLYgsk2tkBUvTWWxGDlLHHzNoqKTS3aCtKdtracpKK8vuJW3ozzq1EuprllvH9i1XGhLXl9sdyOvjjrpjOLbyRHJpKLaz2fgzitVXR+pYxljo3XHBONTz3N1FLsUnFJ58lY0pKnKME3wzKDcZPw1g66k4zglHZ+DKnSdaEoR+3FOaXlLn9N/wI3rvpZPVJfI67XaLb5ZxQfjujqpPS0vY
SLt6lnX9C4pVE8JSWfddz76k8xWD87s6NS6rQo0knObws8fj7H6V0ey6b062hG4r1L6olvKq3oXtGPj55Zyz078e9Pjb29p0OsX13OSctXo00ucRSTPClUne3U69ODwvjqRcvtKPdn7XTuOjzWHb2iT7OjD/BzXH0W+jfUVOSsaEJTWJStpem3+C2/Qk0ZZX1p+Gentqnw98eS8K84KooNpz2bXjwfoXW/wD+M3GnKr0m6nVa39KtjV+DXJ8NW6VdW7lGVKTcXh45T+R1mUrhcbO44m3/AFY+RSbqTSWhpfIvJSi8Sck/D2KvX7mmEaGl4KtaYk4l74KvHcCre3uyeEThZzyWUd1n8giqjybtqdJp+DKLzMsnp1r2wFUpNxksNpPsdCxqWrjO5hFaaiws43waYaWWsZBouqGjLX6dznXB1JyklqeUuERKnFrL/Qm2/C62wlhJEao+nKLhlvGJeCJcZLLDSK5sMA0qxxhozIAAAg1o3FW3lqpVJQ+TMyBZtqW43cayxVbkn8T3ZnwE2nnwb1FTq09cWlNcx8k9Na8u/tgQSQVzCSCQAAAdwAAAAAAAAAAAAAAAAAAAAAAADptaDqzWxjTg5ySR9D0uwxhtGc8tR34eO55aaUOnfw02jkvrHTF7H1lKglBLBx31snB7HjnLfJ9vP4mP9t8JOGiTRU7uoUfTqPY4T2y7j4GePjloABWAAAAAAAAEEgAAAAAAAAAAABMXhm8Xk58mkJErpjW6bT2Z1UL2dJrLyjkTySjNkvt6cc7jdx71t1JSwmz06V1GS5PkYtp5Wx00b2dPGXlHDPil9PpcHzbj1k+r1Rktjnr0Y1E8o82h1FSxud8LiNRcnnuFxfVw5uPlmq8i96enlpHi1reVNvY+wnBSRw3NnGaeUdePm11Xg+V/T5n/AJYPlwdt1ZSpNtLY4mmmeyWX0/PcnHlx3WQACuYAQBJBJAEggkAQSAAAAAAAAAAAKBvCpDjgwAGsuchTafG5RS2w+C7aWNK3A2eyXkjbGc7+BCWt4k9yJSUJYW7Mum/tqlsvBSLxV1Lgt6inBpLc3pWqrQSjnX8h67rVly6xTGOTGrhPfdnp07JpfGm/ZGrs6dShKLppRfEl9pPyYvLjHfH4XJlHk0qXqRypJPw2Yy10akZRbjKLymux7Vt0qhLm4lJ+I7HmX8dNZRis42LjnjldRjl+NycOMyz+1FLXNScIptcx2/Q1jnWjCDzFNco6YfFho6PP7fRfRyh/uVsb/YT/AH/sfSRk8bs8rocNNhTf82ZfqeskcMu69eNsxkSWjOcHmE3F+zKpE4Ibrvt+tXNDCqPXH3OTrFK16k3d0Uqdyl8S/n/5M2UlHYJXj1bGhcxxVpRn81weJf8AQHSTnat//CT5+TPq5QxLKM6sFKOcGpbGbJX57nDcZJxktmn2Ikso+g6z0tVk61FJVY8/1I+ccnF4e39jrLtwyx0pKU08YJSe7b3f6EuT75RV7rZlZSnh7LYvNfFkotl7I1dOpGhCq4tQqZ0vzgCZrRPbfPJMaib42M1LZEbphZlZ6byWlORkqmdnsRLhPsQ8R92TTVzt9M6qwn4Mk2joccowaw2iuY5tpp8FcbZ7AEAAAAABBIIAkAAQSCAJAAAAAAAAAAAAAAAAAAAAAAAAJSy8EHb0+2daqnjYW67axlyuo7+mdPziTWWfUWlsoJbGNjbKEVsenFKKwfP5eTdfo/h/GnHju+0pYRhcxTgzZs4ryuowe5yx9vdn1i+W6xBang8Q9Tqlxrm0jyz6WHp+V+TZeS6AAbeYAAAAAAAABBIAAAAAAAAAAAMBbAAawmbJ5OTODWEyWO2ObcFU8ljLtKnLW6Z0UL2dN7vKOchksl9tY55Y9x71vfxmludinGaPlVJxeYvDOy36hKGFJ7eThnw/cfS4P6hZ1m9evbRqJrGTxL3p7g3KKPXo3kZpbm04Rqx7HPHLLjvb18vDxfKx3Pb4+UXF4ZB6/ULDTmUUeQ04vDPbhlMpuPznPwZcOXjQAGnnAABBIAEEgAAAAAGQAGRkAME5Yy/IEYfhmsIbZZEYyaz2L87ZKIai+wUUnnGxrCKjzuVnNLIFJLfKIUHKSS3b7BT8l4U5zadNPbv4Ism7p6VDpde3SncU8Q5a7myuNVeHp0u+0UuTlh1CtJaas20lg6PrdOUNThmXZI898r7fXw/tSf8At3X/AH26Zzm4yqcSjzHOcovTuVWjBR/I831KdKLcZSlOS3S4RFvJwg5RzqeyTJcOnXHnvlJ/8ui3lKFy05KKXOe5yXiceoOLzjlfiatxuLiEZNJJfFLyTf01WnCpTqRUo/Dg1jdZOPLj58VmP1emdnGlTu8XGPTaz57HbKwaUqtrmcEsum/tL/J56qUq84Qk4UnB71Hnc9SldWtom1XlXl/LTjoX5suXlvcceKcdnhlrX79vpukx02FFPZ6FsejE+DuOs3dw0o1PSpriFLbH48nVafSO9oYVRqvHxPn8xrJLMfqvtCTx7P6RWlxhVG6M/E+PzPWjOM4KUWmnw0yM6GZuW5M56UzmhUc5t9glbuOUZyhszWPBEsYKw8m5WM5Pk+r26p1vVivhns17n1XUZaGsdzyOqUNdo13xkuN1Uym4+cxts9iNL7YIzjfsyU3J4TwdnndPTenVepX9O2g8at5y/lj3Z39bvKVarC1toqNvafBD3Oi2qLpHQpVI/DdXvwxfeMP/AN/Y8Oo0lhEjV6j07mwtlZxr2tZzejXOLXGyyl8tziqW86cVKUWoyWU+zPT6I7a6oVKF1LTj7MnLGDosZRha1LS4jFyoycGn3XYzvTXjL2+dk2lhoqt2e5/079YuJK3uKVOEd6nqv7C8+6K9R+jdz062VzGpC5tX/wCtS4XzXY15Ri415UUtJz1Us7Gik4to9nolWyoRrK8lKM54xhbYRb6STb50H0HXb+nojbW8MJreUuceDwsrwiQs0oC6a8RGr+lBFAXUlh5Sz4wRr/pQVUFtXsiNXsEQCc+wyBAAyAAAAAAAAAAAAAAAAAAAAAAAABMVqaR9N0i20xWx4lhQdWqnjbJ9hY0VTgjz8+epp9T+n/HvJl5X076MdEUaajLWkjGrdxgt2eKdv0PWLerVUIttnz3U71JNZLX/AFVJNJnzdzdSrSe+x6OLit7r5vzPlzGeMZVajqTbZQA9r87bu7AAEAAAAAAAAAQiQAAAAAAAAAAAAAAE8AgDSE8G8ZpnKWUsEsdMc9OoGMahopJkdZltJOAtySNLU6kqbymelbXyaSlyeYhwZyxmXt34ubLiu496U41Y45PFv7LS3OC2L0rmUHu9jq9aNWPk5TG8d3Hsz5MPk46y9vAawwdl3bqLco8HGemXcfF5MLhdUABXMAAAAAB3AAAAAAABrCoow0+lTbf3pLcxLAaeo3/wUb3CD5AvDGMyf4CVRPZLYyNKdPVv2KKnoWN0lD0pJZ7M58R8FHFRlmLwZs3NOnHncMtx11nSm2l8Ml+pnGTSxgp9pZfJeGeM4RNad7l5XaVJZ+LKXsXip3NRRisJcJcJFc
I1g5Z9OnnEtm/JK1j/ACq2nrjT+zH7z7ltFHTpjOrlrdtLb5FKsdFZxfKSM28/MsnTlyclmVi87ejjZzfzwZfV47/E1gl1HulnZFIttP8APDN6cLWkacoPMZ6l4ZrGq1JRnB5fjfJhFNQk4y+Jb48llJ61qjJPs4saWZ2OlTTeFLfwzptb+5spZoVZRXePKf4HF6qeMrV7lZ1XTaxnHhksdZy/r6il9Io1oabiHpz/AJov4X/g9K1q6oLG65yfEKopLPC8nZ0/qdSxrLEm6Ofihz+Ri4/jp5SvuVLYhvY5aVzGpCM4tOMllNdy06uEYNOG9SqXEY9uTivFmLR0a/Vuaku0Fg5byWwhXylRaK04PjUzqsLVVriKk1GmvinJ8RiuWYVVquaku2p/ibur6dt6MOZvNRr9Ednm+1+oXf1y5c4rFNfDTi+0UcM3l7GyWPc53sypWlGp6c09Kks7xlwzr+vSneOs4xip4TjHhHAmXbygbe9RrTuK9KnSelYcZ4+9HG8T0ruNTp3QL71KmKV2/ToUm/6ufyR5XQYyueo2sYrFOjL1Kkn+x1fTCvK5vKMozUqEIaUl92XfP6GNdum+tvHo2Oq1ndOrTlGHMVLdfgcs6qa+FfidFCo3JOKTqLhNZz5XyMLl0al1L6tBwp52i3nH/Bpz+mFapKrPVN5fBnjJ6lpZUry2qw+zVhvGa4+TPOqU5UarhJfFEbNfamCCxD4CIGCfAwBAJZAAAAAAAAAAAAAAAAAAAAAAAAAAABQtCLnJJEJZZ2W1NRkmyW6jfHhc7p63S7ZU4ps9r1404nhwvI0YcnLcdTcsqLPFcMs7t+i4ufi+Nx6j2LrqagnueLc9TlNvDOCrXlUe7MXud8OKR835Hz887qNKlWVR7szAO8mnzcsrld0AAZAAAAAAAAAABCJAAAAAAAAAAAAAAAAAAgEgBklSaIAXbaFXBqppnIFJomm5nY7kSckarRrGsnyTTrM5WojJxezKqafckje/xq561hnFWhiWVwdBWUcoTpM75ztyAtOGllTby2aAAEAAAAIAkAACASAJGGzejZXFdpU6M5fJBZLfTFIM6qvTrqj/ALlCcfmjncJJ7rHzG1uNntTDLwk488BIlrJWWsWms5Kt/EUUGW07+4VpBx1fE8LyXjKLeDLTkmK0yTayiabnJXSkksvgmlV01E4vDXcxnNz9l2Rva2dSq1KU4UaXepVeEv8AJNfrV5Lv/FWacqkpPcwSbqvdYNK9WFJ1KdOoqjUmlOK2l7nGpNZNRyttu63y5yko/C15KrZaZbPyZqo203z5LxbccdvlkIvFTbzGO0e/gZWlYe2e3gr8OHhJPykTCSSw3KfsuComMsrxh9yK32ljZYJUKjmpcCUWsObz22CqwlKDynguqkcrPwP24M5PDWMFGsv3Isunq2vU7mxxGLUqec6Xun8j1n1ulXhiDcaj+7I+Wp1ZUnwmvD4Zo5QqNKK05+63/cxcduuPI+toL0qD1fae7Z53UbpU4ZW8uyPLh1C4to6HNyivuyMaly7hqb5XKJMVyzRL4Fl89v8AJSGW8sicnJ7vLZeL22R0cUyelGEuTfQ5e78I2h027qLVG3mo/wA0lpX6g1tx6XtsaQp1K9WMKNOU54woxWWz2qH0fp+nGrfX9GhS5ljfSvGfPsjap13p3S4ypdGtHJvmtV5l/f8AYzv8amH69Tp3RKtC3hRVSNHPxVJPeUmY38+gdObjcyd7WX3FLOPy2R81edXv73KrXElB/ch8KOBR8mZjfut3PGdYx6F91iV0pUra3pWlu9nClHeS933OajCnjnLKUmo1c42SLySbyvhflGvTGre3oSs7iz6Y7inVxGosyivB48pOU25btnVVv7iVJUXLFNLTp8nJjcRKlkPgkhlZOw3CJAqyCXyQAAAAnBGD1bDpErmn6lWThB8YW7M5ZzCbrtw8OfNl44TdeZpDi0s4eM4yfRfUbS1g5SpqWPvVHk8S7ufrFX4cRpR2hFLCRjDk87078/xLwY7zvf45gAdXiAQSAAAADsAABAEgFkmwqMFlBs1jS8mmlRRLXXHj/WcYKIdVrgrUnjYybyTW1uXj1F5VW+WUbyQDWnO5WgADIAAiCe4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACCQAAAEqTRZVWihAalsbxreS6qpnMEyaamddMkpowlHSI1Gi0pKSBdZMwQSVzAAAIJAAAACYLMgll4RvCOle4bxx3W1CsqDT9OMn7np0fpDUoL4aMfzPHckjOVTPBi4y+3eclw9PTvuu3F099MV4R5c6s6jzJlXySnuakk9OOfJcvYmA9hvyac0lovchcBPcDaJ69l9Hb28UZaFRpy4lU2z+B4ik0XnXq1XmrVqTf9U2xd/SzX2+iu+hUbCmtF9ZzqtbutUwo/KPf8TzZdOp156rnrVmn5zKX7I81JZ4RZEmN/Vuc/HovpXTYSz/rdtNePSmclajRpzxTnTqR/miZLDeMJ/NGqo0pLOhL5GpjWbnPxVUW1lU0aOwqyoqrGjmD21RkuSf4sMaKmV4nuXV1UUJQnS+GXOndfMllhLjXHOhUpNOSqQf9SI9Sov5WvkejDqU3iEZJvj4i9ScZQaqW9Fz5yljJnbXj+PNVTO0ouPujOWze+UzrbtZxyo1IP2eUY+iqk3GGHLspbZNbZ1WD2C3L1KMqf2oSjnuUxjhgMkNjDNKFvVuaip0oOUu+O3u/AFHKU8R5a4ZvbWla6n6VtSlUljL0o+j6b0C2tqeu8nGtUnjSo/ZX+T141aFupNRwqeYtxWM/gc7n+O049+3zFl9Hp11Gpc14Uact0l8Tf9kerbdEtKcFKdJzePvzyn+Xc3neKtH+BCMqecpwm4HPedTlRpVPUnCMpLEMS1v5mbcq1McY67ajRtNdWCjCnJLG2DyeqdbpVZ/CvVcdopv4V/k8296nUuoRprMKMFhRzz8zzpS1M3MfusZcn1GlxdVbmeqrNyxwuy+SMtSxhL8SCO5tz20S2J4WSurbCIbbwgjSm1pfklbsKm+UtyEzLpMutFTbfBnCemak4xl7NbG0sOJzlhl1du2NS3qrDowg/Y5KsdE2k8rsVzgN55JJoyz8p2Ikgkrmq+SCXyQAAAHb0u0V1dLX/tw3l7+x7dz1SharTnMv5Y9jxKVy7Wz009qlV5b8I45Nt5b3PPlxf3Mt5en1eL5c+LxePHP8r7rqveoVLyW/wwXETkAO+OMxmo+dycmXJlcs7ugAK5gAAAAAAQBICWTanSzyGscbVYU3I6IU0i0YpIltIxa9GOEhwjGrUwhUq42RzttvJZGc89dRDeWADTgAAIAAAAAAAAAgkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQSAAAAAAAQSAATwAFAAEAAAAAAABWsElyTKquEZamyBpry61E5yyVyVRbBWdpwguSCQg9y0XhNMqFyABOdy2yWWwsm17ag7itGmpKOe8nsjW8tfqtd01NTWM6o8HJreS0ZZ7iey60t3LIgjUlyzTDVNF1Jrgw9RIlVY9s/k
XaWOqMsolyOeEsPOHgtKqlyma8mfGryjGf2kmWjOrSXwtTj/LP/ACYKtFctr5lvWi1tJMl1fazc9InKLm5RTpt8wlx+BWKaWp8ou5KSw1lGLg0npeF4Zi4/jcy/XVWquq4yyltwjCc9XJEZOTw1hnbQoQptSqxTfh8RMXp0natl053eZ1JKlSW7k+X8j1Kc5WdGP1dpUXtiL3fzx/crTqYTi3JQfeE917nTChBvVOPqPhvjV4zg5279ukx16dHrKjGjOrc4pSynGa+1n9mYXdeagoas1IvaWPh52S8kObTqf9vlz7S04k/J4d3eynN06Uk1w5R2S9l/kSbXK6b3N3GnFU4S1zXLxsmedKTk2292RhJFJS7I6yacbdonPOy4KllDPOxZYS2KyootkNNFnLwE33AiL3Lpbpldk8pbhSbYHRRq6Ki1bxexStiFR4Wz3IccRcnwuPdnXKwlPpMLpPLTeV7Eajhm8xM8E5CKlu0MBgIEvggZAhkEgggkgkC05ann9PBQALbs7AAIAAAAAAAAAF6UNTz2CybaUafdm6RXUoorKskZ9vTNYxeUtKMKlXOyM5VHJlSyOeXJ+DeSACuKSCe4AAAAAAAA7gAAAAAAAAAABBICAAgkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFyABaWOxUAKAE4KiASCCMAkAQCQUFySIrcnAAAkCASMAQyMbFiUBXBOE02nuJcYRTJGtraW1yXhTX3v0Kwlthl1sVlOIrhEqX4EZAFm3yzKcm2Wm+xXS++y9wI1ZW+5GnPG5ZJfP5k4b2y0vZAZ7xZeMZNZbaXzKy05xHMn7msI4gs7p8kXTqttMUnJLbyTN1KlZenNZxjDMcOH3nhhVmpL90Z01t6EIzpwzUmvfTsa0Lmpqzq+CL1Sb4x7nlyuZKfnwvJnUuqlWHp5SjnOEuf8k8WvLTq6j1Od3N0qOY0uPeRyJKKEY6VnuVll/I3Jpzt2OWXhceSMxRGyJTS7FRLllbIrjyXxq7lXH3Ah4XBXJZoq0BBtFKMdT57IzUe74EpZCkpOWEexO+p0FGg45iopZPF7lnJy3f6kNtbmnGFTMHmEt0YkttoJFEEFmgkEQC2CGBXBBZkAQCWQAZBLIAAAgAAAQSAAAAJbmnqYWEZgNS6S5tkZACbAO4CAIJAAAAAAAAAAAAAAABAEgAACCQAQCAIAAAAAAAAAAAAAAAAAAAAAAAAAYAAnDGAIBbBGAIBJOCioJADGwJXAaAgE4GAAJIwAIwTgkCASRgCYokRWCQBBbAwBAJAEMJNvGC2AsxeU8AUksTxkrLHYmcnKTb5IwRUZNIz8meC2Co0ymSimrCwiMtgaJ4ZEpRby1llcY55CWpgXi0Rq1TS7e4w3suEMZXhruQWjBRntjfybfC4rb8CmPhTzl+xDk0/2QaWrSa+zjBnFYWe7J3b347IzlLLYTaJSzsvzCWle5KWFkrnJReDyn5KtsmHJDTy9giCSCUBeK2DJjwRLkCjGMbssklyQ9wIc/Y1trdV225JIxaITa3WwVetRlRnpkvxI+4iXXlKOmb1L37FAie5ZI66PTXc28alvWhOf3qecSRhOlOlJxqRcZLs0NmrGTQW5aSCQEFWXZVoCpBbBAEAACASAIBJAEAsGBUAEAAAAAAAGAAGBgAAAAAAAAAAAAAAAAAAAAAAAAAECUBAAAAnAwBAJwMAQCcACASCiASCCASMFEAkEEAkYAkAFE5IzkYAVBIGAgASBAJAEAkAAAAAABInAROQIwEi2Q8oAlktjBESwFcAtjI9N6c9gSW+lcE4ICAsRP7LLbYKz+yBm4YSfkhIsxyBUlIthLnkjOOAJSRLwirkWi4PZxfzCybV5ZdLPBTK1bPbyaxawEVWxZZcudiHjOxpFYQELK+XgnYNlGBEiuEtySrAh5l7EqK+YYi34AutiJTUZeRn2IeG+MgPUg+UNKe6/UjdbpJEPPfcC62IyI8E4AgYJwAKtEYNMFQM2gXaIALMXlPD8o1dzUksTk5LxLczZCWWBdyTCIaQQB8kMl/IgCMkMkYYFSCzIAgFiAIBIAhBkjsBUEgCASMAQQWwQBGScjBIEDIwABBOBgCCRgEEYBIwBAGCcAQBgkCAAAAwAAAAAAASiCUAwST+AKKgkgAACACSAJ2GwGCgBgYAAYJw8ZAgAAASkWUE/vYApgtpNY2+t4U1kv9TqeSbXTDTtyNCfc3+p1CPqlQbNMdC8jSvJv9TqMn6lVGzTmaQOn6jV9g7Gr7Daac6jnuQ44Oj6nV8G9t0+UqiU5Rh7sWyNY43K6jgwSo5eD2J9JrSfwypSivbByVun1IVGoL4fmZmcvp1z+PyYTeUZQtFJZdSPyMpUcPCeTf6lV8pE/UanlGtuOq5/Rl3aX4keklzJHT9Sq+xP1WouVF/gNmq5NHiSLKm5PCa/M7PQkvu0/wAUTGhN/dp/hHI2ac0rOrFZUdS/peTJprZnrQUaa+Oo4/Kl/wAnPKjZ5bdWeP8A4jZquahSVSSTnGK8s6J2tGEW1cxk/CRZULNtYrtL3izVWto+LiP4jZpw+mn95L5kaMr7f4Hq07CjNJxqKS9max6XTxnTKWPA2arx4W0Z/wDqY/8A6m07XRHGcv3jg9iHTVGPwaopi4tKlVJy0ya23JtqSx4lOynN4/bBhUpODafKPYdlXjl0lTi37nM+kXMnvKH/ANi7Z081QzwyHGUX4PUh0a6eylFfIv8A6HcvOZwz7jcNV5WlYy+Rs+x6j6HcL70CP9FrrmcBuGq8tx9i0XHGMbHpLo9b/wByCLR6LPHxVl+ERuLjbHkSj8Txx7EpHsx6PBSWuvLHtA6l0uzi3hTl82NxNWvAXyNI8HtfUqMXmEfzwYVLaCbWmLY2aeXJFGj0J2k0tkjGVrXXH7lTTja3JUTd2tbnQ2QoSp4dSnLGeAaY43wluMNex1unRck9L+SZFZKpCMaVNJpmfJ2y4fGW2uKSZC53Oj6tVbxp/Un6nXzjR+qNbcdMNsB9kdH1G4fFJ/mif9Pum8ei/wAwaYaEuJfmVbw8HV/p14v/AEZfmR/pt1/7D/Fkavf05sjJ307e8pxwreL92isrC6qScnTgn4i8DZcZrpxZG52Lply/uL8zSPR7lvfSvxG4zqvPwIx7vg9J9IrLmUEWXRp/+7D8mLYsn8PMlBNbPAwscnsR6XUUdKqQXl6clX0iT5rL8Imdxv8A5HlqEWsuol7YJ9KEd/Wj+TPXj0huKi6zSXgiXRo5y6kpfMvlGfGvGlhfeT+RCWcnsPpFGOMz58h9Iim8TQ8oeNeTGlKSyovB0SsJtJwX5tHW+mpPaqk/2FSxnJLVcvYbNPPdjWXLj+ZSVrOOMuP5nVKzgnvXyYyt6cf/AFX+Rds6YSouK3lH8GRoz95fiaulSyv4m3yDpUe1V/8A1KMdKzjUhpW/xE4UZprdEN5b2AjHuMLyABGPcEt+xGpfygCCc+yGfYCATq9l+Q1MCMDAyxkAAAAAAAAAQSAAAAgEgCASAIAAAEkAQCQACBKIP//Z" />
These are GGUF quantized versions of [sophosympatheia/Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3).
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later. The IQ3_XXS requires version `f4d7e54` or later.
Some model files above 50GB are split into smaller files. To concatenate them, use the `cat` command (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`
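As an example, one of these quants can be loaded through llama-cpp-python (the package, the filename, and the context size below are assumptions; adjust them to the file you actually downloaded):
```python
from llama_cpp import Llama  # pip install llama-cpp-python, built against a recent llama.cpp

# The filename is an assumption; use the exact .gguf file you downloaded from this repo
llm = Llama(model_path="midnight-rose-70b-v2.0.3.IQ3_XXS.gguf", n_ctx=4096)
out = llm("Write a short scene set in a moonlit rose garden.", max_tokens=128)
print(out["choices"][0]["text"])
```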
|
yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4
|
yaneq
| 2024-02-07T06:10:46Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-07T06:10:43Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MDDL man
license: openrail++
---
# SDXL LoRA DreamBooth - yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4
<Gallery />
## Model description
These are yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of MDDL man` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4/tree/main) them in the Files & versions tab.
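A minimal inference sketch with 🤗 Diffusers (assumes a CUDA GPU and that the LoRA weights load directly from this repo; the trigger phrase comes from the section above):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("yaneq/jan_zdRM8UdoamtJ6kdZKNKS_SDXL_LoRA_700_9d94_700_1e4")

# The trigger phrase "a photo of MDDL man" activates the learned subject
image = pipe("a photo of MDDL man", num_inference_steps=30).images[0]
image.save("mddl_man.png")
```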
## Training properties
- max_train_steps: 700
- learning_rate: 0.0001
- base_model_name: stabilityai/stable-diffusion-xl-base-1.0
- class_name: man
- training_images_urls: - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FWF2NGBPUFgu9eyaCYAwB.jpg?alt=media&token=97c1e215-0a96-4fdf-b292-9ee0e497ba72
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fcn54hvM4ahi3MzpCQN5D.jpg?alt=media&token=e096f4dc-e7c5-4e14-88fc-a5562d103127
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fz8D9WdMIx4mXcsDGAZm4.jpg?alt=media&token=fded9422-eb7c-4757-8c1f-cb436a348579
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F6JW19SVZPczh5B2DEqKD.jpg?alt=media&token=0e0dc94f-957d-4b51-8979-0216c0849cf6
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FDAk5k1hGzP9q9y0jpGoO.jpg?alt=media&token=01ed67d1-938a-4f60-bc1a-e1b91412b97e
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F82McawlxnTeA2vBc4bZg.jpg?alt=media&token=f7cfacb2-2186-4005-9211-b7ef762dafad
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FY7nFiafx8co1nK6cnjWJ.jpg?alt=media&token=a1fe8c9a-4d5e-4043-9a82-9304fd430569
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FVYOVRhojKt30NzjWRXL0.jpg?alt=media&token=5a3a2afb-4b83-4488-92e5-6651f5173cc0
- gradient_accumulation_steps: 3
- GPU: T4
- duration: 5284.340887546539
|
saraswathi01/a2c-PandaPickAndPlace-v3
|
saraswathi01
| 2024-02-07T06:10:16Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T06:06:06Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename is an assumption; check the repo's Files & versions tab for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the trained agent from the Hub and load it (filename assumed from the repo name)
checkpoint = load_from_hub("saraswathi01/a2c-PandaPickAndPlace-v3", "a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
|
VishalMishraTss/deit-base-patch16-224-finetuned-ind-14-imbalanced-pan-10847-train
|
VishalMishraTss
| 2024-02-07T06:08:11Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-07T05:07:47Z |
---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- recall
- f1
- precision
model-index:
- name: deit-base-patch16-224-finetuned-ind-14-imbalanced-pan-10847-train
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8703170028818443
- name: Recall
type: recall
value: 0.8703170028818443
- name: F1
type: f1
value: 0.8411548955923809
- name: Precision
type: precision
value: 0.8252839064351536
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224-finetuned-ind-14-imbalanced-pan-10847-train
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4660
- Accuracy: 0.8703
- Recall: 0.8703
- F1: 0.8412
- Precision: 0.8253
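A minimal inference sketch with the `image-classification` pipeline (the label names come from the undocumented `imagefolder` dataset, so interpret them accordingly):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="VishalMishraTss/deit-base-patch16-224-finetuned-ind-14-imbalanced-pan-10847-train",
)
print(classifier("path/to/image.jpg"))  # local path or URL to an image
```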
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:|
| 0.7292 | 0.99 | 43 | 0.6759 | 0.7925 | 0.7925 | 0.7582 | 0.7420 |
| 0.5224 | 2.0 | 87 | 0.5146 | 0.8501 | 0.8501 | 0.8228 | 0.8057 |
| 0.5103 | 2.97 | 129 | 0.4916 | 0.8674 | 0.8674 | 0.8391 | 0.8244 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
logeeshanv/Llama-2-7b-chat-hf-sharded-bf16-5GB-fine-tuned-adapters
|
logeeshanv
| 2024-02-07T06:07:59Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16-5GB",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16-5GB",
"region:us"
] | null | 2024-02-07T05:46:50Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16-5GB
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
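A hedged sketch for loading the adapters with PEFT (the base and adapter ids are taken from this card's metadata; `device_map="auto"` assumes `accelerate` is installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Trelis/Llama-2-7b-chat-hf-sharded-bf16-5GB"
adapter_id = "logeeshanv/Llama-2-7b-chat-hf-sharded-bf16-5GB-fine-tuned-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapters

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```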
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
shnl/llama2-13b-vicoqa
|
shnl
| 2024-02-07T06:03:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-13b",
"base_model:adapter:manhtt-079/llama-2-13b",
"region:us"
] | null | 2024-02-07T06:01:57Z |
---
library_name: peft
base_model: manhtt-079/llama-2-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
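For reference, the same quantization settings can be reconstructed when loading the base model for inference (a sketch only; the base model id comes from this card's metadata):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "manhtt-079/llama-2-13b", quantization_config=bnb_config, device_map="auto"
)
```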
### Framework versions
- PEFT 0.6.2
|
rombodawg/DeepMagic-Coder-7b
|
rombodawg
| 2024-02-07T06:02:22Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T19:58:50Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
# DeepMagic-Coder-7b
(Note: From short testing, the Alt version generated much better code)
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least with limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is detailed below, and it's clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
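As an illustration only (not mergekit's actual implementation), task arithmetic on a single weight tensor looks roughly like this:
```python
import torch

def task_arithmetic(base: torch.Tensor, finetuned: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    """Merge fine-tuned checkpoints that share a common base, one tensor at a time."""
    # task vector = fine-tuned weights minus the shared base weights
    task_vectors = [ft - base for ft in finetuned]
    # merged weights = base plus the weighted sum of the task vectors
    merged = base.clone()
    for w, tv in zip(weights, task_vectors):
        merged = merged + w * tv
    return merged
```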
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using mergekit, and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
shnl/llama2-13b-vimmrc2.0
|
shnl
| 2024-02-07T05:57:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-13b",
"base_model:adapter:manhtt-079/llama-2-13b",
"region:us"
] | null | 2024-02-07T05:56:13Z |
---
library_name: peft
base_model: manhtt-079/llama-2-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
ChayanM/Image_Captioner
|
ChayanM
| 2024-02-07T05:57:48Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-02-04T17:43:12Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: Image_Captioner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Image_Captioner
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0923
- Rouge1: 25.0369
- Rouge2: 10.1572
- Rougel: 21.5244
- Rougelsum: 24.0775
- Gen Len: 18.9946
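A minimal inference sketch (it assumes the exported checkpoint bundles an image processor and tokenizer so the `image-to-text` pipeline can use it directly):
```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="ChayanM/Image_Captioner")
print(captioner("path/to/image.jpg"))  # local path or URL to an image
```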
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.253 | 1.0 | 836 | 0.1372 | 29.3958 | 12.2981 | 25.5129 | 27.9289 | 19.0 |
| 0.1361 | 2.0 | 1672 | 0.1151 | 25.8361 | 12.2894 | 23.7346 | 25.47 | 19.0 |
| 0.115 | 3.0 | 2508 | 0.1037 | 25.1859 | 11.9032 | 23.1038 | 24.8338 | 19.0 |
| 0.1027 | 4.0 | 3344 | 0.0942 | 26.0345 | 12.0324 | 23.4843 | 25.5426 | 19.0 |
| 0.0873 | 5.0 | 4180 | 0.0864 | 26.1657 | 11.685 | 23.6563 | 25.6247 | 19.0 |
| 0.0742 | 6.0 | 5016 | 0.0794 | 24.3621 | 10.5113 | 21.7192 | 23.8253 | 19.0 |
| 0.0646 | 7.0 | 5852 | 0.0740 | 24.711 | 11.194 | 22.2089 | 24.1793 | 19.0 |
| 0.0542 | 8.0 | 6688 | 0.0690 | 25.0339 | 10.8651 | 22.171 | 24.4106 | 19.0 |
| 0.046 | 9.0 | 7524 | 0.0650 | 25.0982 | 11.8399 | 22.701 | 24.623 | 18.9987 |
| 0.0386 | 10.0 | 8360 | 0.0623 | 26.2563 | 10.4715 | 22.5319 | 25.1412 | 18.9987 |
| 0.0317 | 11.0 | 9196 | 0.0591 | 26.4001 | 11.8031 | 23.1653 | 25.2856 | 18.9919 |
| 0.0273 | 12.0 | 10032 | 0.0587 | 25.6521 | 11.0174 | 22.7327 | 24.9068 | 18.9879 |
| 0.0231 | 13.0 | 10868 | 0.0583 | 26.7035 | 11.2021 | 23.0121 | 25.6384 | 18.9946 |
| 0.0195 | 14.0 | 11704 | 0.0592 | 25.5747 | 10.7424 | 22.3673 | 24.6944 | 19.0 |
| 0.0167 | 15.0 | 12540 | 0.0608 | 25.3022 | 10.163 | 21.9556 | 24.3587 | 18.9596 |
| 0.0142 | 16.0 | 13376 | 0.0614 | 25.0496 | 10.0656 | 21.7629 | 24.1094 | 18.9206 |
| 0.0119 | 17.0 | 14212 | 0.0618 | 26.0112 | 10.2519 | 22.1926 | 24.8873 | 18.8735 |
| 0.0102 | 18.0 | 15048 | 0.0653 | 25.6183 | 10.04 | 22.1136 | 24.5255 | 18.9125 |
| 0.0086 | 19.0 | 15884 | 0.0671 | 24.7352 | 9.6328 | 21.0675 | 23.7704 | 18.8694 |
| 0.0076 | 20.0 | 16720 | 0.0693 | 24.9512 | 9.6635 | 21.4761 | 23.9132 | 18.9112 |
| 0.0067 | 21.0 | 17556 | 0.0708 | 24.1732 | 9.158 | 20.3408 | 23.029 | 18.8358 |
| 0.0058 | 22.0 | 18392 | 0.0732 | 24.4503 | 9.4394 | 20.8584 | 23.4242 | 18.8035 |
| 0.0048 | 23.0 | 19228 | 0.0738 | 24.8844 | 9.9125 | 21.3509 | 23.9336 | 18.8089 |
| 0.0043 | 24.0 | 20064 | 0.0777 | 25.5401 | 10.1857 | 21.8328 | 24.4294 | 18.9058 |
| 0.0038 | 25.0 | 20900 | 0.0781 | 24.2235 | 9.0445 | 20.4463 | 23.0001 | 18.9166 |
| 0.0033 | 26.0 | 21736 | 0.0801 | 25.0127 | 9.8025 | 21.3116 | 23.9683 | 18.7308 |
| 0.0029 | 27.0 | 22572 | 0.0807 | 24.5765 | 9.6283 | 20.9556 | 23.4559 | 18.9166 |
| 0.0027 | 28.0 | 23408 | 0.0830 | 24.8389 | 9.8899 | 21.4027 | 23.9416 | 18.9233 |
| 0.0024 | 29.0 | 24244 | 0.0833 | 25.3695 | 10.162 | 21.7865 | 24.3737 | 18.7106 |
| 0.0022 | 30.0 | 25080 | 0.0832 | 24.8804 | 10.0825 | 21.4621 | 24.0326 | 18.9287 |
| 0.0021 | 31.0 | 25916 | 0.0853 | 25.0049 | 9.7036 | 21.3664 | 23.9173 | 18.9044 |
| 0.0019 | 32.0 | 26752 | 0.0855 | 25.0529 | 9.4994 | 21.2781 | 24.0076 | 18.9125 |
| 0.002 | 33.0 | 27588 | 0.0852 | 24.8417 | 9.9376 | 21.2526 | 23.8552 | 18.9031 |
| 0.0015 | 34.0 | 28424 | 0.0857 | 24.6359 | 9.5179 | 20.8941 | 23.4553 | 18.8937 |
| 0.0014 | 35.0 | 29260 | 0.0858 | 25.1156 | 10.1869 | 21.5805 | 23.9664 | 18.8156 |
| 0.0013 | 36.0 | 30096 | 0.0871 | 24.739 | 9.5548 | 21.15 | 23.749 | 18.9219 |
| 0.0011 | 37.0 | 30932 | 0.0884 | 24.774 | 9.7848 | 21.2467 | 23.833 | 18.9556 |
| 0.0011 | 38.0 | 31768 | 0.0889 | 25.2656 | 9.9796 | 21.517 | 24.1836 | 18.9462 |
| 0.0011 | 39.0 | 32604 | 0.0895 | 24.6627 | 9.3783 | 20.9288 | 23.5835 | 18.9704 |
| 0.001 | 40.0 | 33440 | 0.0906 | 25.1326 | 9.814 | 21.3593 | 24.0816 | 18.9260 |
| 0.0009 | 41.0 | 34276 | 0.0900 | 25.6889 | 10.3712 | 22.0588 | 24.695 | 18.9731 |
| 0.0008 | 42.0 | 35112 | 0.0911 | 24.6819 | 9.8307 | 21.1335 | 23.7053 | 18.9071 |
| 0.0008 | 43.0 | 35948 | 0.0905 | 24.4835 | 9.7292 | 21.017 | 23.5027 | 18.9623 |
| 0.0007 | 44.0 | 36784 | 0.0910 | 24.8203 | 9.5875 | 21.245 | 23.7718 | 18.9825 |
| 0.0007 | 45.0 | 37620 | 0.0914 | 25.1212 | 10.1024 | 21.6215 | 24.1061 | 18.9771 |
| 0.0006 | 46.0 | 38456 | 0.0914 | 25.1636 | 9.8127 | 21.5343 | 24.13 | 18.9475 |
| 0.0006 | 47.0 | 39292 | 0.0915 | 24.866 | 9.8427 | 21.3531 | 23.8643 | 18.9394 |
| 0.0006 | 48.0 | 40128 | 0.0916 | 25.064 | 10.049 | 21.5198 | 24.1158 | 18.9731 |
| 0.0005 | 49.0 | 40964 | 0.0923 | 24.8424 | 9.9718 | 21.3263 | 23.9031 | 18.9933 |
| 0.0005 | 50.0 | 41800 | 0.0923 | 25.0369 | 10.1572 | 21.5244 | 24.0775 | 18.9946 |
### Framework versions
- Transformers 4.37.1
- Pytorch 1.13.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.1
|
shnl/llama2-7b-vimmrc2.0
|
shnl
| 2024-02-07T05:55:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-7b",
"base_model:adapter:manhtt-079/llama-2-7b",
"region:us"
] | null | 2024-02-07T05:54:02Z |
---
library_name: peft
base_model: manhtt-079/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
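For reference, a minimal sketch of the equivalent `transformers` quantization object (not part of the original training script; the parameter names simply mirror the list above):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the 4-bit NF4 settings listed above (double quantization, bfloat16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```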
### Framework versions
- PEFT 0.6.2
|
shnl/llama2-13b-vimmrc1.0
|
shnl
| 2024-02-07T05:52:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-13b",
"base_model:adapter:manhtt-079/llama-2-13b",
"region:us"
] | null | 2024-02-07T05:51:10Z |
---
library_name: peft
base_model: manhtt-079/llama-2-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
shnl/llama2-7b-vimmrc1.0
|
shnl
| 2024-02-07T05:50:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-7b",
"base_model:adapter:manhtt-079/llama-2-7b",
"region:us"
] | null | 2024-02-07T05:48:59Z |
---
library_name: peft
base_model: manhtt-079/llama-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
shnl/llama2-13b-viquad
|
shnl
| 2024-02-07T05:47:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-13b",
"base_model:adapter:manhtt-079/llama-2-13b",
"region:us"
] | null | 2024-02-07T05:33:01Z |
---
library_name: peft
base_model: manhtt-079/llama-2-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
ideepankarsharma2003/AI_GenImageClassifier_MidJourney
|
ideepankarsharma2003
| 2024-02-07T05:45:48Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2024-01-30T11:28:52Z |
# **Not a MODEL, just a practice repo**
|
leoreigoto/teste_temp
|
leoreigoto
| 2024-02-07T05:35:47Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ybelkada/blip2-opt-2.7b-fp16-sharded",
"base_model:adapter:ybelkada/blip2-opt-2.7b-fp16-sharded",
"region:us"
] | null | 2024-02-03T02:57:02Z |
---
library_name: peft
base_model: ybelkada/blip2-opt-2.7b-fp16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
shnl/llama2-13b-vinewsqa
|
shnl
| 2024-02-07T05:27:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:manhtt-079/llama-2-13b",
"base_model:adapter:manhtt-079/llama-2-13b",
"region:us"
] | null | 2024-02-07T05:22:51Z |
---
library_name: peft
base_model: manhtt-079/llama-2-13b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.6.2
|
aitamilnadu/marabutamil
|
aitamilnadu
| 2024-02-07T05:25:30Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"ta",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T05:14:02Z |
---
license: gpl-3.0
language:
- ta
inference:
parameters:
max_new_tokens: 250
repetition_penalty: 1.4
do_sample: True
temperature: 0.01 # Added to match the script's generation behavior
widget:
- text: |
இன்னாமை வேண்டின்
example_title: "Venba 1"
- text: |
பாடல்:
நின்றன நின்றன நில்லாகும்
example_title: "Venba 2"
- text: |
பாடல்:
துகள்தீர் பெருஞ்செல்வம்
example_title: "Venba 3"
- text: |
பாடல்:
கொங்குதேர் வாழ்க்கை அஞ்சிறைத் தும்பி
example_title: "Venba 4"
- text: |
பாடல்:
செல்வத்துட் செல்வம்
example_title: "Venba 5"
- text: |
வேதம் உரைத்தானும் வேதிய னாகிலன்
example_title: "Venba 6"
---
To experience this model in action, we encourage you to visit our demo space at [aitamilnadu/MarabuTamilDemo](https://huggingface.co/spaces/aitamilnadu/MarabuTamilDemo). Please note that the Inference API widget on the right-hand side may occasionally produce unexpected results.
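A minimal generation sketch with `transformers` (generation settings mirror the widget parameters above; this snippet is illustrative, not part of the original card):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="aitamilnadu/marabutamil")
out = generator(
    "பாடல்:\nநின்றன நின்றன நில்லாகும்",  # one of the widget examples above
    max_new_tokens=250, repetition_penalty=1.4, do_sample=True, temperature=0.01,
)
print(out[0]["generated_text"])
```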
|
shazzz/ppo-LunarLander-v2
|
shazzz
| 2024-02-07T05:23:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T05:23:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 248.23 +/- 20.14
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the Files tab for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub and load it.
checkpoint = load_from_hub("shazzz/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
chenhaodev/mistral-7b-mmlu-v1
|
chenhaodev
| 2024-02-07T05:17:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | null | 2024-02-07T05:03:57Z |
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-mmlu-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-mmlu-v1
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the medical_meadow_mmmlu dataset.
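A minimal loading sketch (assumes the LoRA adapter in this repo is attached to the base model with PEFT; the 4-bit loading used in the Performance section is omitted for brevity):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this adapter.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "chenhaodev/mistral-7b-mmlu-v1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```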
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Performance
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-mmlu-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|medmcqa |Yaml |none | 0|acc | 0.47|± |0.0502|
|professional_medicine| 0|none | 0|acc | 0.79|± |0.0409|
|college_medicine | 0|none | 0|acc | 0.72|± |0.0451|
|clinical_knowledge | 0|none | 0|acc | 0.72|± |0.0451|
|aocnp |Yaml |none | 0|acc | 0.56|± |0.0499|
|ocn |Yaml |none | 0|acc | 0.66|± |0.0476|
|
theidoldaily/maki-nishikino
|
theidoldaily
| 2024-02-07T05:17:44Z | 7 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-02-05T05:18:09Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
defined eyes, masterpiece, high quality, defined pupil, looking at viewer,
rounded pupil,
parameters:
negative_prompt: >-
bad_anatomy, deformation, amputation, deformity, deformed_nipples,
duplicated_torso, deformed_torso, long_torso, large_torso,
unproportioned_torso, (deformed_pussy:1.2), (deformed_hands:1.2),
unproportioned_eyes, unproportioned_head, small_head, duplicated_nose,
big_nose, fusioned_clothes, fusioned_arms, undefined_limbs, divided_pussy,
red_pussy, duplicated_pussy, deformed_anus, deformed_pussy,
output:
url: demo-1.png
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: id_maki_nishikino
license: mit
---
# Maki Nishikino
<Gallery />
## Model description
This model was trained to generate high quality images based on SIFAS cards.
To achieve better quality, use hako-mikan's Regional Prompter extension with Latent mode, which changes how Stable Diffusion isolates the LoRA and yields a significant improvement.
## Trigger words
You should use `id_maki_nishikino` to trigger the image generation.
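A minimal generation sketch with `diffusers` (the LoRA weight filename is an assumption; check the Files & versions tab for the exact name):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0", torch_dtype=torch.float16
).to("cuda")
# Weight filename is an assumption; replace it with the actual .safetensors file in this repo.
pipe.load_lora_weights("theidoldaily/maki-nishikino", weight_name="maki-nishikino.safetensors")
image = pipe("id_maki_nishikino, masterpiece, high quality, looking at viewer").images[0]
image.save("maki.png")
```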
## Download model
Weights for this model are available in Safetensors format.
[Download](/theidoldaily/maki-nishikino/tree/main) them in the Files & versions tab.
|
happyxujin/a2c-PandaReachDense-v3
|
happyxujin
| 2024-02-07T05:11:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T05:07:17Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.22 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the Files tab for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the trained checkpoint from the Hub and load it.
checkpoint = load_from_hub("happyxujin/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
karawalla/ship-ai-v1_release
|
karawalla
| 2024-02-07T05:03:38Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-05T20:23:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ybzz/detr-pothole-augment
|
ybzz
| 2024-02-07T04:56:57Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-02-07T04:56:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blaze999/finetuned-ner-conll
|
blaze999
| 2024-02-07T04:50:07Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-07T02:26:38Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned-ner-conll
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9285243741765481
- name: Recall
type: recall
value: 0.9488387748232918
- name: F1
type: f1
value: 0.9385716663892125
- name: Accuracy
type: accuracy
value: 0.9862247601106728
pipeline_tag: token-classification
widget:
- text: "Saketh Lives in India"
example_title: "Classification"
- text: "Apollo hospitals is in India"
example_title: "Classification"
- text: "Saketh works for Apollo"
example_title: "Classification"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-ner-conll
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Precision: 0.9285
- Recall: 0.9488
- F1: 0.9386
- Accuracy: 0.9862
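A minimal usage sketch with the `transformers` pipeline (repo id and example text taken from this card; the snippet itself is illustrative):
```python
from transformers import pipeline

# Aggregate sub-word predictions into whole entities.
ner = pipeline("token-classification", model="blaze999/finetuned-ner-conll", aggregation_strategy="simple")
print(ner("Saketh works for Apollo"))
```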
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.218 | 1.0 | 878 | nan | 0.9080 | 0.9367 | 0.9221 | 0.9827 |
| 0.0449 | 2.0 | 1756 | nan | 0.9277 | 0.9485 | 0.9380 | 0.9857 |
| 0.0232 | 3.0 | 2634 | nan | 0.9285 | 0.9488 | 0.9386 | 0.9862 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
varun-v-rao/bert-large-cased-bn-adapter-3.17M-snli-model2
|
varun-v-rao
| 2024-02-07T04:46:51Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T02:22:08Z |
---
license: apache-2.0
base_model: bert-large-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-cased-bn-adapter-3.17M-snli-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-bn-adapter-3.17M-snli-model2
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7747
- Accuracy: 0.731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4017 | 1.0 | 8584 | 0.3327 | 0.8763 |
| 0.3769 | 2.0 | 17168 | 0.3069 | 0.8881 |
| 0.3641 | 3.0 | 25752 | 0.3005 | 0.8895 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
AsphyXIA/baarat-hin-en-0.1
|
AsphyXIA
| 2024-02-07T04:46:11Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T04:46:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
varun-v-rao/t5-base-bn-adapter-1.79M-snli-model3
|
varun-v-rao
| 2024-02-07T04:42:15Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T02:16:46Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: t5-base-bn-adapter-1.79M-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-bn-adapter-1.79M-snli-model3
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7044
- Accuracy: 0.7455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 79
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.4101 | 1.0 | 8584 | 0.3336 | 0.8763 |
| 0.3814 | 2.0 | 17168 | 0.3112 | 0.8858 |
| 0.3695 | 3.0 | 25752 | 0.3061 | 0.8883 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ealvaradob/bert-phishing-url
|
ealvaradob
| 2024-02-07T04:36:27Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"dataset:ealvaradob/phishing-dataset",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-28T19:02:38Z |
---
license: apache-2.0
datasets:
- ealvaradob/phishing-dataset
---
<strong><span style="color:red">WARNING ...</span></strong>
This is **NOT** the final BERT model trained for phishing detection. It only corresponds to an evaluation of BERT performance against URL samples.
This model has the following performance in URL phishing detection:
- Accuracy: 0.976815
- Precision: 0.985979
- Recall: 0.964295
- AUC: 0.996684
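For a quick check, this evaluation checkpoint can be run through a standard text-classification pipeline. A minimal sketch (the example URL is made up and the label names depend on the model's config):
```python
from transformers import pipeline

# Loads this URL-evaluation checkpoint; see below for the final fine-tuned model
classifier = pipeline("text-classification", model="ealvaradob/bert-phishing-url")
print(classifier("http://secure-login.example-bank.verify-account.xyz/update"))
```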
👇 CHECK THE FINAL BERT MODEL FINE-TUNED FOR PHISHING DETECTION AT THE FOLLOWING LINK! 👇
_https://huggingface.co/ealvaradob/bert-finetuned-phishing_
|
Opensourced/wormgpt-24
|
Opensourced
| 2024-02-07T04:31:50Z | 0 | 6 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-02-07T04:21:04Z |
---
license: apache-2.0
---
```python
from datasets import load_dataset

dataset = load_dataset("suriyagunasekar/stackoverflow-python-with-meta-data")
```
|
sneakykilli/Emirates_BERTopic
|
sneakykilli
| 2024-02-07T04:18:55Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2024-02-07T03:53:01Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# Emirates_BERTopic
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("sneakykilli/Emirates_BERTopic")
topic_model.get_topic_info()
```
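After loading, the fitted model can also assign topics to unseen documents. A quick sketch (the example review is made up):
```python
from bertopic import BERTopic

topic_model = BERTopic.load("sneakykilli/Emirates_BERTopic")
# transform() returns the predicted topic id(s) and associated probabilities
topics, probs = topic_model.transform(["My Emirates flight was delayed and I am still waiting for a refund."])
print(topics)
```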
## Topic overview
* Number of topics: 11
* Number of training documents: 375
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | emirates - airline - airlines - flights - refund | 9 | -1_emirates_airline_airlines_flights |
| 0 | emirates - airlines - airline - dubai - flights | 100 | 0_emirates_airlines_airline_dubai |
| 1 | airline - airlines - flights - aviation - planes | 68 | 1_airline_airlines_flights_aviation |
| 2 | emirates - meals - meal - attendant - airline | 35 | 2_emirates_meals_meal_attendant |
| 3 | emirates - refund - cancel - booking - ticket | 34 | 3_emirates_refund_cancel_booking |
| 4 | airline - refunded - refund - ticket - booking | 28 | 4_airline_refunded_refund_ticket |
| 5 | emirates - dubai - baggage - luggage - airline | 26 | 5_emirates_dubai_baggage_luggage |
| 6 | emirates - airline - refund - seats - flights | 26 | 6_emirates_airline_refund_seats |
| 7 | emirates - airlines - airline - booking - fees | 23 | 7_emirates_airlines_airline_booking |
| 8 | passengers - airline - emirates - stewardess - aisle | 14 | 8_passengers_airline_emirates_stewardess |
| 9 | emirates - delayed - dubai - delays - flights | 12 | 9_emirates_delayed_dubai_delays |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 5
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.24.3
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.3.1
* Transformers: 4.36.2
* Numba: 0.57.1
* Plotly: 5.16.1
* Python: 3.10.12
|
sneakykilli/Qatar_BERTopic
|
sneakykilli
| 2024-02-07T04:18:52Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2024-02-07T03:52:25Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# Qatar_BERTopic
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("sneakykilli/Qatar_BERTopic")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 22
* Number of training documents: 714
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | doha - qatar - airline - airlines - refund | 5 | -1_doha_qatar_airline_airlines |
| 0 | doha - qatar - airline - airlines - flights | 211 | 0_doha_qatar_airline_airlines |
| 1 | refund - refunded - refunds - booking - voucher | 78 | 1_refund_refunded_refunds_booking |
| 2 | doha - qatar - baggage - luggage - airline | 72 | 2_doha_qatar_baggage_luggage |
| 3 | airline - passengers - flights - attendant - steward | 49 | 3_airline_passengers_flights_attendant |
| 4 | qatar - airline - airlines - flights - carriers | 44 | 4_qatar_airline_airlines_flights |
| 5 | baggage - doha - airlines - airline - luggage | 39 | 5_baggage_doha_airlines_airline |
| 6 | airline - airlines - flights - emirates - flight | 35 | 6_airline_airlines_flights_emirates |
| 7 | refund - airline - flights - flight - cancel | 32 | 7_refund_airline_flights_flight |
| 8 | airline - airlines - seats - qatar - seating | 28 | 8_airline_airlines_seats_qatar |
| 9 | qatar - doha - airlines - flights - emirates | 18 | 9_qatar_doha_airlines_flights |
| 10 | customer - complaints - service - terrible - horrible | 17 | 10_customer_complaints_service_terrible |
| 11 | qatar - complaint - doha - complaints - airline | 15 | 11_qatar_complaint_doha_complaints |
| 12 | avios - qatar - booking - compensation - aviso | 14 | 12_avios_qatar_booking_compensation |
| 13 | airline - airlines - flight - airplane - horrible | 9 | 13_airline_airlines_flight_airplane |
| 14 | doha - qatar - flights - cancellation - airlines | 8 | 14_doha_qatar_flights_cancellation |
| 15 | doha - qatar - qatari - emirates - flight | 8 | 15_doha_qatar_qatari_emirates |
| 16 | doha - qatar - airlines - bangkok - airport | 8 | 16_doha_qatar_airlines_bangkok |
| 17 | seats - seating - airline - booked - seat | 7 | 17_seats_seating_airline_booked |
| 18 | qatar - opodo - airline - refunded - voucher | 6 | 18_qatar_opodo_airline_refunded |
| 19 | doha - qatar - flight - destinations - airways | 6 | 19_doha_qatar_flight_destinations |
| 20 | qatar - airlines - disability - flight - wheelchair | 5 | 20_qatar_airlines_disability_flight |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 5
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.24.3
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.3.1
* Transformers: 4.36.2
* Numba: 0.57.1
* Plotly: 5.16.1
* Python: 3.10.12
|
sneakykilli/Singapore_BERTopic
|
sneakykilli
| 2024-02-07T04:18:48Z | 4 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2024-02-07T03:52:40Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# Singapore_BERTopic
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("sneakykilli/Singapore_BERTopic")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 10
* Number of training documents: 160
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | airline - airlines - flights - refund - flight | 6 | -1_airline_airlines_flights_refund |
| 0 | airline - airlines - flights - singapore - meals | 31 | 0_airline_airlines_flights_singapore |
| 1 | refund - airline - airlines - complaint - singapore | 43 | 1_refund_airline_airlines_complaint |
| 2 | baggage - luggage - airlines - airline - bags | 20 | 2_baggage_luggage_airlines_airline |
| 3 | airlines - passengers - seats - flight - cabin | 14 | 3_airlines_passengers_seats_flight |
| 4 | refund - repayment - sia - customer - complaints | 11 | 4_refund_repayment_sia_customer |
| 5 | airlines - airline - fees - singapore - flights | 10 | 5_airlines_airline_fees_singapore |
| 6 | refund - airline - cancellation - booking - cancel | 9 | 6_refund_airline_cancellation_booking |
| 7 | miles - airlines - airline - mileage - loyalty | 9 | 7_miles_airlines_airline_mileage |
| 8 | airline - flight - reviews - booking - customer | 7 | 8_airline_flight_reviews_booking |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 5
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.24.3
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.0.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.3.1
* Transformers: 4.36.2
* Numba: 0.57.1
* Plotly: 5.16.1
* Python: 3.10.12
|
wentingzhao/question-evaluator
|
wentingzhao
| 2024-02-07T04:12:53Z | 4 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-05T04:50:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chenhaodev/mistral-7b-medmcqa-inst-v1
|
chenhaodev
| 2024-02-07T04:06:07Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | null | 2024-02-07T03:31:34Z |
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-medmcqa-inst-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-medmcqa-inst-v1
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the medmcqa_instruct dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Performance
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-medmcqa-inst-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|medmcqa |Yaml |none | 0|acc | 0.48|± |0.0502|
|professional_medicine| 0|none | 0|acc | 0.61|± |0.0490|
|college_medicine | 0|none | 0|acc | 0.57|± |0.0498|
|clinical_knowledge | 0|none | 0|acc | 0.65|± |0.0479|
|ocn |Yaml |none | 0|acc | 0.68|± |0.0469|
|aocnp |Yaml |none | 0|acc | 0.56|± |0.0499|
### Original Performance (mistralai/Mistral-7B-v0.1)
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|medmcqa |Yaml |none | 0|acc | 0.45|± |0.0500|
|professional_medicine| 0|none | 0|acc | 0.64|± |0.0482|
|college_medicine | 0|none | 0|acc | 0.65|± |0.0479|
|clinical_knowledge | 0|none | 0|acc | 0.68|± |0.0469|
|ocn |Yaml |none | 0|acc | 0.62|± |0.0488|
|aocnp |Yaml |none | 0|acc | 0.47|± |0.0502|
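The evaluations above attach this repository as a LoRA adapter on top of the base model. A minimal PEFT loading sketch (4-bit loading mirrors the eval command and requires bitsandbytes; generation settings are up to you):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", load_in_4bit=True, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "chenhaodev/mistral-7b-medmcqa-inst-v1")
```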
|
chenhaodev/mistral-7b-medwiki-v1
|
chenhaodev
| 2024-02-07T04:05:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | null | 2024-02-06T09:26:37Z |
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-medwiki-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-medwiki-v1
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the medical_meadow_wikidoc dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Performance
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-medwiki-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.99|± |0.0100|
|professional_medicine| 0|none | 0|acc | 0.57|± |0.0498|
|college_medicine | 0|none | 0|acc | 0.59|± |0.0494|
|clinical_knowledge | 0|none | 0|acc | 0.58|± |0.0496|
|medmcqa |Yaml |none | 0|acc | 0.40|± |0.0492|
|ocn |Yaml |none | 0|acc | 0.61|± |0.0490|
|aocnp |Yaml |none | 0|acc | 0.52|± |0.0502|
### Original Performance
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|professional_medicine| 0|none | 0|acc | 0.64|± |0.0482|
|college_medicine | 0|none | 0|acc | 0.65|± |0.0479|
|clinical_knowledge | 0|none | 0|acc | 0.68|± |0.0469|
|medmcqa |Yaml |none | 0|acc | 0.45|± |0.0500|
|ocn |Yaml |none | 0|acc | 0.62|± |0.0488|
|aocnp |Yaml |none | 0|acc | 0.47|± |0.0502|
|
LoneStriker/DeepMagic-Coder-7b-GPTQ
|
LoneStriker
| 2024-02-07T03:57:36Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T03:55:46Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
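In other words, each fine-tuned model contributes a "task vector" (its weights minus the base weights), and the merged model is the base plus a weighted sum of those vectors. A toy sketch over state dicts (illustrative only, not mergekit's actual implementation):
```python
import torch

def task_arithmetic_merge(base_state, finetuned_states, weights):
    """Toy illustration: add weighted task vectors (fine-tuned minus base) back onto the base."""
    merged = {}
    for name, base_param in base_state.items():
        task_vectors = [ft[name] - base_param for ft in finetuned_states]
        merged[name] = base_param + sum(w * tv for w, tv in zip(weights, task_vectors))
    return merged
```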
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using mergekit, and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
varun-v-rao/opt-350m-snli-model3
|
varun-v-rao
| 2024-02-07T03:52:40Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-classification",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:finetune:facebook/opt-350m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T02:00:23Z |
---
license: other
base_model: facebook/opt-350m
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opt-350m-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m-snli-model3
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9962
- Accuracy: 0.7495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 74
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3313 | 1.0 | 2146 | 0.2725 | 0.8994 |
| 0.2398 | 2.0 | 4292 | 0.2611 | 0.9070 |
| 0.1536 | 3.0 | 6438 | 0.2971 | 0.9071 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
frntcx/Reinforce
|
frntcx
| 2024-02-07T03:50:28Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T03:50:21Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 348.70 +/- 57.73
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
humung/koalpaca-polyglot-12.8B-lora-vlending-v0.1
|
humung
| 2024-02-07T03:49:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T03:49:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bianxg/q-FrozenLake-v1-4x4-noSlippery
|
bianxg
| 2024-02-07T03:45:49Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T03:45:44Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="bianxg/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
car13mesquita/bert-finetuned-sem_eval-rest14-english-2
|
car13mesquita
| 2024-02-07T03:30:42Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-07T02:51:04Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-sem_eval-rest14-english-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sem_eval-rest14-english-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0972
- F1: 0.3594
- Accuracy: 0.6088
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| No log | 1.0 | 127 | 0.2075 | 0.0 | 0.0 |
| No log | 2.0 | 254 | 0.1641 | 0.0802 | 0.2338 |
| No log | 3.0 | 381 | 0.1376 | 0.1519 | 0.395 |
| 0.1978 | 4.0 | 508 | 0.1233 | 0.1850 | 0.4213 |
| 0.1978 | 5.0 | 635 | 0.1115 | 0.2654 | 0.5238 |
| 0.1978 | 6.0 | 762 | 0.1052 | 0.3145 | 0.565 |
| 0.1978 | 7.0 | 889 | 0.1023 | 0.3371 | 0.5787 |
| 0.0922 | 8.0 | 1016 | 0.0988 | 0.3549 | 0.6025 |
| 0.0922 | 9.0 | 1143 | 0.0980 | 0.3561 | 0.6 |
| 0.0922 | 10.0 | 1270 | 0.0972 | 0.3594 | 0.6088 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
LoneStriker/DeepMagic-Coder-7b-5.0bpw-h6-exl2
|
LoneStriker
| 2024-02-07T03:29:39Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T03:27:46Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using mergekit, and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
LoneStriker/DeepMagic-Coder-7b-4.0bpw-h6-exl2
|
LoneStriker
| 2024-02-07T03:27:43Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T03:26:09Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using mergekit, and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
LoneStriker/DeepMagic-Coder-7b-3.0bpw-h6-exl2
|
LoneStriker
| 2024-02-07T03:26:07Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T03:24:51Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using mergekit, and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
theofcks/MATUE30PRAUM
|
theofcks
| 2024-02-07T03:25:17Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2024-02-07T03:25:15Z |
---
license: other
license_name: nothing
license_link: LICENSE
---
|
trinath/LunarLander-v5
|
trinath
| 2024-02-07T03:23:09Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T03:21:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.79 +/- 17.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it
checkpoint = load_from_hub(repo_id="trinath/LunarLander-v5", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
gotchu/season-8-v2-solar
|
gotchu
| 2024-02-07T03:21:41Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:bhavinjawade/SOLAR-10B-Nector-DPO-Jawade",
"base_model:merge:bhavinjawade/SOLAR-10B-Nector-DPO-Jawade",
"base_model:bhavinjawade/SOLAR-10B-OrcaDPO-Jawade",
"base_model:merge:bhavinjawade/SOLAR-10B-OrcaDPO-Jawade",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T03:15:50Z |
---
base_model:
- bhavinjawade/SOLAR-10B-OrcaDPO-Jawade
- bhavinjawade/SOLAR-10B-Nector-DPO-Jawade
library_name: transformers
tags:
- mergekit
- merge
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [bhavinjawade/SOLAR-10B-OrcaDPO-Jawade](https://huggingface.co/bhavinjawade/SOLAR-10B-OrcaDPO-Jawade)
* [bhavinjawade/SOLAR-10B-Nector-DPO-Jawade](https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: bhavinjawade/SOLAR-10B-OrcaDPO-Jawade
dtype: float16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 48]
model:
model:
path: bhavinjawade/SOLAR-10B-Nector-DPO-Jawade
- layer_range: [0, 48]
model:
model:
path: bhavinjawade/SOLAR-10B-OrcaDPO-Jawade
```
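The merged checkpoint loads like any other causal LM on the Hub. A minimal sketch (`device_map="auto"` requires accelerate):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gotchu/season-8-v2-solar")
model = AutoModelForCausalLM.from_pretrained(
    "gotchu/season-8-v2-solar", torch_dtype="auto", device_map="auto"
)
```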
|
LoneStriker/DeepMagic-Coder-7b-GGUF
|
LoneStriker
| 2024-02-07T03:19:15Z | 8 | 5 | null |
[
"gguf",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-07T03:03:17Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
---
DeepMagic-Coder-7b
Alternate version:
- https://huggingface.co/rombodawg/DeepMagic-Coder-7b-Alt

This is an extremely successful merge of the deepseek-coder-6.7b-instruct and Magicoder-S-DS-6.7B models, bringing an uplift in overall coding performance without any compromise to the model's integrity (at least in limited testing).
This is the first of my models to use mergekit's *task_arithmetic* merging method. The method is detailed below, and it is clearly very useful for merging AI models that were fine-tuned from a common base:
Task Arithmetic:
```
Computes "task vectors" for each model by subtracting a base model.
Merges the task vectors linearly and adds back the base.
Works great for models that were fine tuned from a common ancestor.
Also a super useful mental framework for several of the more involved
merge methods.
```
The original models used in this merge can be found here:
- https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
- https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct
The merge was created using mergekit, and the parameters can be found below:
```yaml
models:
- model: deepseek-ai_deepseek-coder-6.7b-instruct
parameters:
weight: 1
- model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
weight: 1
merge_method: task_arithmetic
base_model: ise-uiuc_Magicoder-S-DS-6.7B
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
asadmasad/output-67b-11k-test
|
asadmasad
| 2024-02-07T03:18:20Z | 4 | 1 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"text-generation",
"conversational",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T01:38:20Z |
---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
model-index:
- name: output-67b-11k-test
results: []
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output-67b-11k-test
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0051 | 1.0 | 1 | 0.0813 |
| 0.0051 | 2.0 | 2 | 0.0813 |
| 0.0051 | 3.0 | 3 | 0.0811 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Sacbe/ViT_SAM_Classification
|
Sacbe
| 2024-02-07T03:17:54Z | 0 | 0 |
transformers
|
[
"transformers",
"biology",
"image-classification",
"arxiv:2010.11929",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-02-07T02:31:37Z |
---
license: apache-2.0
metrics:
- accuracy
- f1
- precision
- recall
library_name: transformers
pipeline_tag: image-classification
tags:
- biology
---
# Summary
The model was trained using the VisionTransformer base model together with Google's SAM optimizer and the negative log likelihood loss on the [Wildfire](https://drive.google.com/file/d/1TlF8DIBLAccd0AredDUimQQ54sl_DwCE/view?usp=sharing) dataset. The results show that the classifier reached 97% accuracy with only 10 training epochs.
The underlying theory is summarized below.

# VisionTransformer
**Attention-based neural networks such as the Vision Transformer** (ViT) have recently attained state-of-the-art results on many computer vision benchmarks. Scale is a primary ingredient in attaining excellent results, therefore, understanding a model's scaling properties is a key to designing future generations effectively. While the laws for scaling Transformer language models have been studied, it is unknown how Vision Transformers scale. To address this, we scale ViT models and data, both up and down, and characterize the relationships between error rate, data, and compute. Along the way, we refine the architecture and training of ViT, reducing memory consumption and increasing accuracy of the resulting models. As a result, we successfully train a ViT model with two billion parameters, which attains a new state-of-the-art on ImageNet of 90.45% top-1 accuracy. The model also performs well for few-shot transfer, for example, reaching 84.86% top-1 accuracy on ImageNet with only 10 examples per class.
[1] A. Dosovitskiy et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale". arXiv, June 3, 2021. Accessed: November 12, 2023. [Online]. Available: http://arxiv.org/abs/2010.11929
# Sharpness Aware Minimization (SAM)
SAM simultaneously minimizes loss value and loss sharpness. In particular, it seeks parameters that lie in neighborhoods having uniformly low loss. SAM improves model generalization and yields SoTA performance for several datasets. Additionally, it provides robustness to label noise on par with that provided by SoTA procedures that specifically target learning with noisy labels.

*ResNet loss landscape at the end of training with and without SAM. Sharpness-aware updates lead to a significantly wider minimum, which then leads to better generalization properties.*
[2] P. Foret, A. Kleiner, and H. Mobahi, "Sharpness-Aware Minimization For Efficiently Improving Generalization", 2021.
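A rough sketch of a single SAM update step in PyTorch (illustrative only; the actual training used Google's SAM implementation, and the helper name, `rho` value, and two-pass structure below are assumptions based on the paper):
```python
import torch

def sam_step(model, loss_fn, optimizer, x, y, rho=0.05):
    # First backward pass: gradient at the current weights
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params])) + 1e-12
    # Ascent step: move to the (approximate) worst-case point inside an L2 ball of radius rho
    eps = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / grad_norm
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()
    # Second backward pass: gradient at the perturbed weights
    loss_fn(model(x), y).backward()
    # Undo the perturbation, then update the original weights with the sharpness-aware gradient
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
```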
# The negative log likelihood loss
It is useful to train a classification problem with $C$ classes.
If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either (minibatch, $C$ ) or ( minibatch, $C, d_1, d_2, \ldots, d_K$ ) with $K \geq 1$ for the $K$-dimensional case. The latter is useful for higher dimension inputs, such as computing NLL loss per-pixel for 2D images.
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects should be a class index in the range $[0, C-1]$ where $C$ is the number of classes; if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range).
The unreduced (i.e. with reduction set to 'none') loss can be described as:
$$
\ell(x, y)=L=\left\{l_1, \ldots, l_N\right\}^{\top}, \quad l_n=-w_{y_n} x_{n, y_n}, \quad w_{c}=\text{weight}[c] \cdot \mathbb{1}\{c \neq \text{ignore\_index}\}
$$
where $x$ is the input, $y$ is the target, $w$ is the weight, and $N$ is the batch size. If reduction is not 'none' (default 'mean'), then
$$
\ell(x, y)= \begin{cases}\sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n}} l_n, & \text { if reduction }=\text { 'mean' } \\ \sum_{n=1}^N l_n, & \text { if reduction }=\text { 'sum' }\end{cases}
$$
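A minimal PyTorch sketch of the LogSoftmax + NLLLoss pairing described above (shapes and sizes are arbitrary):
```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 4), nn.LogSoftmax(dim=1))  # 4 classes
criterion = nn.NLLLoss()

x = torch.randn(8, 16)                 # batch of 8 examples
target = torch.randint(0, 4, (8,))     # class indices in [0, 3]
loss = criterion(model(x), target)
loss.backward()
```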
# Results
<img src="https://cdn-uploads.huggingface.co/production/uploads/64ff2131f7f3fa2d7fe256fc/CO6vFEjt3FkxB8JgZTbEd.png" width="500" />
|
ambrosfitz/tinyllama-history-chat_v0.1
|
ambrosfitz
| 2024-02-07T03:16:49Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T17:55:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Deepnoid/OPEN-SOLAR-KO-10.7B
|
Deepnoid
| 2024-02-07T03:11:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:beomi/OPEN-SOLAR-KO-10.7B",
"base_model:finetune:beomi/OPEN-SOLAR-KO-10.7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T01:46:52Z |
---
license: apache-2.0
base_model: beomi/OPEN-SOLAR-KO-10.7B
tags:
- generated_from_trainer
model-index:
- name: beomidpo-out-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: beomi/OPEN-SOLAR-KO-10.7B
load_in_8bit: false
load_in_4bit: false
strict: false
rl: dpo
datasets:
- path: datasets/dposet/dpodatav2.jsonl
ds_type: json
data_files:
- datasets/dposet/dpodatav2.jsonl
split: train
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./beomidpo-out-v2
adapter: lora
lora_model_dir:
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5
train_on_inputs: false
group_by_length: false
bf16: false
fp16: true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
warmup_steps: 10
save_steps: 100
save_total_limit: 3
debug:
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
save_safetensors: false
```
</details><br>
# beomidpo-out-v2
This model is a fine-tuned version of [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 2645
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
chenhaodev/mistral-7b-medqa-v1
|
chenhaodev
| 2024-02-07T03:05:03Z | 3 | 1 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:other",
"region:us"
] | null | 2024-02-07T02:28:34Z |
---
license: other
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-medqa-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-medqa-v1
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the medical_meadow_medqa dataset.
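The card does not include loading code; a minimal sketch, assuming the LoRA adapter in this repository is attached to the base model with PEFT (the prompt is a placeholder), could look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "chenhaodev/mistral-7b-medqa-v1"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter

prompt = "What is the first-line treatment for type 2 diabetes?"  # placeholder question
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```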
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.1+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
### Performance
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True,peft=chenhugging/mistral-7b-medqa-v1), gen_kwargs: (None), limit: 100.0, num_fewshot: None
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|ocn |Yaml |none | 0|acc | 0.71|± |0.0456|
|professional_medicine| 0|none | 0|acc | 0.69|± |0.0465|
|college_medicine | 0|none | 0|acc | 0.61|± |0.0490|
|clinical_knowledge | 0|none | 0|acc | 0.63|± |0.0485|
|medmcqa |Yaml |none | 0|acc | 0.41|± |0.0494|
|aocnp |Yaml |none | 0|acc | 0.61|± |0.0490|
### Appendix (original performance before lora-finetune)
hf (pretrained=mistralai/Mistral-7B-v0.1,parallelize=True,load_in_4bit=True), gen_kwargs: (None), limit: 100.0, num_fewshot: None, batch_size: 1
| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|
|---------------------|-------|------|-----:|--------|----:|---|-----:|
|pubmedqa | 1|none | 0|acc | 0.98|± |0.0141|
|ocn |Yaml |none | 0|acc | 0.62|± |0.0488|
|professional_medicine| 0|none | 0|acc | 0.64|± |0.0482|
|college_medicine | 0|none | 0|acc | 0.65|± |0.0479|
|clinical_knowledge | 0|none | 0|acc | 0.68|± |0.0469|
|medmcqa |Yaml |none | 0|acc | 0.45|± |0.0500|
|aocnp |Yaml |none | 0|acc | 0.47|± |0.0502|
|
Peverell/mnist-resnet18
|
Peverell
| 2024-02-07T03:02:19Z | 4 | 0 |
transformers
|
[
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-02-07T02:52:40Z |
Dataset: MNIST
Model-architecture: ResNet-18
training accuracy: 0.9988
testing accuracy: 0.9934
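The card does not show how to run the model; a minimal sketch, assuming a standard torchvision ResNet-18 adapted to 1-channel, 10-class MNIST and a hypothetical checkpoint file name (the actual layout of this repository may differ), might look like:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Standard ResNet-18 adapted to MNIST: 1 input channel, 10 output classes.
model = resnet18(num_classes=10)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

# Hypothetical checkpoint path; the file name in this repository may differ.
state_dict = torch.load("mnist_resnet18.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()

with torch.no_grad():
    dummy = torch.randn(1, 1, 28, 28)   # one fake 28x28 grayscale image
    print(model(dummy).argmax(dim=1))   # predicted digit
```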
|
janhq/stealth-finance-v1-GGUF
|
janhq
| 2024-02-07T03:00:25Z | 5 | 1 | null |
[
"gguf",
"en",
"base_model:jan-hq/stealth-finance-v1",
"base_model:quantized:jan-hq/stealth-finance-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-07T02:45:24Z |
---
license: apache-2.0
language:
- en
base_model: jan-hq/stealth-finance-v1
model_creator: jan-hq
model_name: stealth-finance-v1
quantized_by: JanHQ
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a GGUF version of [jan-hq/stealth-finance-v1](https://huggingface.co/jan-hq/stealth-finance-v1)
- Model creator: [jan-hq](https://huggingface.co/jan-hq)
- Original model: [stealth-finance-v1](https://huggingface.co/jan-hq/stealth-finance-v1)
- Model description: [Readme](https://huggingface.co/jan-hq/stealth-finance-v1/blob/main/README.md)
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Converter
This is a repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute to and strengthen this repository. We aim to expand it so that it can convert models into various formats.
|
vikhyatk/moondream1
|
vikhyatk
| 2024-02-07T02:57:53Z | 76,449 | 487 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"moondream1",
"text-generation",
"custom_code",
"en",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-20T18:10:04Z |
---
language:
- en
---
# 🌔 moondream1
1.6B parameter model built by [@vikhyatk](https://x.com/vikhyatk) using SigLIP, Phi-1.5 and the LLaVa training dataset.
The model is released for research purposes only; commercial use is not allowed.
Try it out on [Huggingface Spaces](https://huggingface.co/spaces/vikhyatk/moondream1)!
**Usage**
```
pip install transformers timm einops
```
```python
from transformers import AutoModelForCausalLM, CodeGenTokenizerFast as Tokenizer
from PIL import Image
model_id = "vikhyatk/moondream1"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = Tokenizer.from_pretrained(model_id)
image = Image.open('<IMAGE_PATH>')
enc_image = model.encode_image(image)
print(model.answer_question(enc_image, "<QUESTION>", tokenizer))
```
## Benchmarks
| Model | Parameters | VQAv2 | GQA | TextVQA |
| --- | --- | --- | --- | --- |
| LLaVA-1.5 | 13.3B | 80.0 | 63.3 | 61.3 |
| LLaVA-1.5 | 7.3B | 78.5 | 62.0 | 58.2 |
| **moondream1** | 1.6B | 74.7 | 57.9 | 35.6 |
## Examples
| Image | Examples |
| --- | --- |
|  | **What is the title of this book?**<br>The Little Book of Deep Learning<br><br>**What can you tell me about this book?**<br>The book in the image is titled "The Little Book of Deep Learning." It appears to be a guide or manual that provides information and instructions on the subject of deep learning. The book is described as being small and thin, making it easy to carry and read. It is set in a room with a view outside, suggesting that the person holding the book might be in a living or working space with a view. The content of the book is likely to be complex, as the title implies a focus on advanced concepts in deep learning.<br><br>**Where is the person standing?**<br> The person is standing on a balcony or deck, which is outside. |
|  | **What type of food is the girl holding?**<br>The girl is holding a hamburger.<br><br>**What color is the woman's hair?**<br>The woman's hair is white.<br><br>**What is the girl doing in the image?**<br>The girl in the image is eating a hamburger. |
|  | **What color is the train?**<br>The train is red.<br><br>**What is the bus doing in the image?**<br> The bus in the image is driving down a city street, passing through an intersection.<br><br>**What kind of public transportation is in the image?**<br>The image features a red train or trolley as the public transportation option. |
|  | **What is the girl looking at?**<br>The girl is looking at her reflection in the mirror.<br><br>**What color is the woman's hair?**<br>The woman's hair is brown.<br><br>**What is the woman doing in the mirror?**<br>The woman is adjusting her hair in the mirror. |
|  | **What might be the reason for the dog's aggressive behavior?**<br>The dog's aggressive behavior, including its growling, baring its teeth, and showing its sharp teeth, could be due to various reasons. It is possible that the dog is feeling threatened, scared, or defensive about something in its environment, such as another animal, a loud noise, or a person nearby. Alternatively, the dog might be reacting to a command it received from its owner, or it could be experiencing discomfort or pain, which is causing it to display aggressive behavior. It is essential to observe the dog's body language and context to determine the underlying cause of its aggressive behavior and respond accordingly to address its needs or concerns.<br><br>**What kind of dog is in the picture?**<br>The dog in the picture is a small, black and white chihuahua. |
|
rhplus0831/maid-yuzu-v5
|
rhplus0831
| 2024-02-07T02:52:28Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-06T18:20:26Z |
This model was created because I was curious whether the randomly assembled 8x7B model made by a user could be merged with other existing 8x7B models.
Perhaps this was not suitable for the MoE design: a problem occurred during the quantization process.
|
Krisbiantoro/merged_mixtral_id
|
Krisbiantoro
| 2024-02-07T02:42:24Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mixtral",
"arxiv:1910.09700",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-v0.1",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-01-25T04:23:59Z |
---
library_name: peft
base_model: mistralai/Mixtral-8x7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
SparseLLM/reglu-95B
|
SparseLLM
| 2024-02-07T02:34:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:12:12Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
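For illustration, the gated and non-gated activation variants named above can be sketched in a few lines of PyTorch (an illustration only, not the training code used for these checkpoints):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GLUFeedForward(nn.Module):
    # LLaMA-style gated feed-forward block; act=F.silu gives SwiGLU, act=F.relu gives ReGLU.
    def __init__(self, dim, hidden_dim, act=F.silu):
        super().__init__()
        self.gate = nn.Linear(dim, hidden_dim, bias=False)
        self.up = nn.Linear(dim, hidden_dim, bias=False)
        self.down = nn.Linear(hidden_dim, dim, bias=False)
        self.act = act

    def forward(self, x):
        return self.down(self.act(self.gate(x)) * self.up(x))

def relu_squared(x):
    # Squared ReLU ("ReLU^2"): exactly zero for negative pre-activations, hence sparse.
    return F.relu(x) ** 2

x = torch.randn(2, 8)
swiglu = GLUFeedForward(8, 32, act=F.silu)  # SwiGLU
reglu = GLUFeedForward(8, 32, act=F.relu)   # ReGLU
print(swiglu(x).shape, reglu(x).shape, relu_squared(x).shape)
```

The exact zeros produced by the ReLU-style variants are what make the sparse, skip-the-zeros inference discussed above possible.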
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-90B
|
SparseLLM
| 2024-02-07T02:34:26Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:06:32Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-80B
|
SparseLLM
| 2024-02-07T02:33:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T06:59:29Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-70B
|
SparseLLM
| 2024-02-07T02:31:59Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T06:44:43Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-45B
|
SparseLLM
| 2024-02-07T02:30:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T06:18:00Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-40B
|
SparseLLM
| 2024-02-07T02:30:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T05:47:31Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/reglu-15B
|
SparseLLM
| 2024-02-07T02:29:03Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T05:29:40Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-95B
|
SparseLLM
| 2024-02-07T02:27:34Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T14:38:45Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
mathreader/ppo-LunarLander-v2
|
mathreader
| 2024-02-07T02:26:22Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-07T02:26:04Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.96 +/- 13.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
SparseLLM/swiglu-10B
|
SparseLLM
| 2024-02-07T02:23:00Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T14:26:59Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-25B
|
SparseLLM
| 2024-02-07T02:22:10Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T14:08:49Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-30B
|
SparseLLM
| 2024-02-07T02:21:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T14:02:46Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-35B
|
SparseLLM
| 2024-02-07T02:21:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T14:00:50Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-40B
|
SparseLLM
| 2024-02-07T02:21:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T13:58:26Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-65B
|
SparseLLM
| 2024-02-07T02:20:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T13:36:56Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/swiglu-70B
|
SparseLLM
| 2024-02-07T02:19:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T13:34:59Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-10B
|
SparseLLM
| 2024-02-07T02:17:35Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:20:07Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-5B
|
SparseLLM
| 2024-02-07T02:17:02Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:15:10Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: Which activation function is optimal for sparse LLMs? Although previous works on activation function selection have focused on the performance of LLMs, we argue that the efficiency of sparse computation should also be considered so that the LLMs can proceed with efficient inference while preserving performance.
To answer this question, we pretrain 4 LLMs with different activation functions, including ReLU, SwiGLU, ReGLU, and Squared ReLU to do more comprehensive experiments.
### Dataset
We pretrain the model on 100 billion tokens, including:
* Refinedweb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-25B
|
SparseLLM
| 2024-02-07T02:16:49Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:31:21Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and compare them comprehensively.
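A minimal loading sketch for this checkpoint is shown below; it assumes the weights load as a standard Llama-architecture causal LM via 🤗 Transformers (as the repository tags suggest), and the prompt and generation settings are illustrative only:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SparseLLM/relu2-25B"  # this repository
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

inputs = tokenizer("Sparse activation allows", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```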
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-20B
|
SparseLLM
| 2024-02-07T02:16:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:26:23Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and compare them comprehensively.
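Activation sparsity here means the fraction of FFN intermediate activations that are (near-)zero for a given input. The snippet below is a hedged sketch of one way to measure it; the paper's exact measurement protocol may differ:
```python
import torch

@torch.no_grad()
def activation_sparsity(hidden: torch.Tensor, threshold: float = 0.0) -> float:
    """Fraction of entries whose magnitude is at most `threshold`."""
    return (hidden.abs() <= threshold).float().mean().item()

# Toy example: post-ReLU hidden states are roughly half zeros
hidden = torch.relu(torch.randn(4, 128, 11008))
print(f"sparsity: {activation_sparsity(hidden):.2%}")
```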
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-30B
|
SparseLLM
| 2024-02-07T02:15:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:33:37Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and compare them comprehensively.
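As a structural sketch, a Squared-ReLU feed-forward block can stand in for the usual gated block as follows. The layer sizes and the absence of a gate projection here are assumptions for illustration, not facts taken from this card:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Relu2FFN(nn.Module):
    """Feed-forward block using Squared ReLU; a sketch, not the released architecture."""

    def __init__(self, d_model: int = 4096, d_ff: int = 11008):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.up(x)).pow(2)  # many entries are exactly zero
        return self.down(h)
```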
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
SparseLLM/relu2-40B
|
SparseLLM
| 2024-02-07T02:15:46Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"arxiv:2402.03804",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-14T07:39:52Z |
---
language:
- en
library_name: transformers
license: llama2
---
### Background
Sparse computation is increasingly recognized as an important direction in enhancing the computational efficiency of large language models (LLMs).
Prior research has demonstrated that LLMs utilizing the ReLU activation function exhibit sparse activations. Interestingly, our findings indicate that models based on SwiGLU also manifest sparse activations.
This phenomenon prompts an essential question: which activation function is optimal for sparse LLMs? While previous work on activation function selection has focused on model performance, we argue that the efficiency of sparse computation should also be considered, so that LLMs can run inference efficiently while preserving performance.
To answer this question, we pretrain four LLMs with different activation functions (ReLU, SwiGLU, ReGLU, and Squared ReLU) and compare them comprehensively.
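The efficiency argument is that exactly-zero activations let inference skip the corresponding rows of the down projection. The single-token toy below only illustrates the idea; real sparse-inference kernels are considerably more involved:
```python
import torch

@torch.no_grad()
def sparse_down_proj(h: torch.Tensor, w_down: torch.Tensor) -> torch.Tensor:
    """h: (d_ff,) post-activation vector; w_down: (d_ff, d_model) down projection."""
    active = h.nonzero(as_tuple=True)[0]  # indices of non-zero activations
    return h[active] @ w_down[active]     # zero rows are skipped entirely

h = torch.relu(torch.randn(11008)).pow(2)  # ReLU^2 activations, roughly half zero
w_down = torch.randn(11008, 4096)
out = sparse_down_proj(h, w_down)
print(out.shape)  # torch.Size([4096])
```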
### Dataset
We pretrain the model on 100 billion tokens, including:
* RefinedWeb
* SlimPajama
### Training Hyper-parameters
| Parameter | Value |
|-----------------------|-------------|
| Batch_Size | 4M |
| GPUs | 64xA100(80G)|
| LR_Scheduler | cosine |
| LR | 3e-4 |
### Citation:
Please kindly cite using the following BibTeX:
```bibtex
@article{zhang2024relu2,
title={ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs},
author={Zhengyan Zhang and Yixin Song and Guanghui Yu and Xu Han and Yankai Lin and Chaojun Xiao and Chenyang Song and Zhiyuan Liu and Zeyu Mi and Maosong Sun},
journal = {arXiv preprint arXiv:2402.03804},
year={2024},
}
```
|
hxgrace/model_6_20
|
hxgrace
| 2024-02-07T02:15:16Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-11T02:58:10Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-hxgrace/model_6_20
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning, based on the dataset found at [hxgrace/augmentedSketches](https://huggingface.co/datasets/hxgrace/augmentedSketches). They were trained with a batch size of 6 for 20 epochs.
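A hedged usage sketch with the Diffusers ControlNet pipeline is shown below; the conditioning image URL is a placeholder (a sketch-style input matching the training data is the intended conditioning), and the prompt and step count are illustrative:
```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained("hxgrace/model_6_20", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

cond = load_image("https://example.com/sketch.png")  # placeholder conditioning image
image = pipe("a detailed illustration", image=cond, num_inference_steps=30).images[0]
image.save("out.png")
```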
|