| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
Coooori/llama_checkpoint-2100
|
Coooori
| 2024-01-20T20:56:05Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T20:56:03Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
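For reference, the settings above correspond roughly to the following `transformers` `BitsAndBytesConfig` (a minimal sketch, assuming the adapter was trained with the `transformers`/`peft` stack; the base model name is a placeholder, not taken from this card):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model, not specified in this card
    quantization_config=bnb_config,
    device_map="auto",
)
```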
### Framework versions
- PEFT 0.4.0
|
Coooori/llama_checkpoint-2000
|
Coooori
| 2024-01-20T20:35:16Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T20:35:13Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
jlbaker361/vanilla-ddpo25
|
jlbaker361
| 2024-01-20T20:34:54Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-20T05:05:35Z
|
---
{}
---
# DDPO trained model
num_epochs=1
train_gradient_accumulation_steps=4
sample_num_steps=30
sample_batch_size=4
train_batch_size=4
sample_num_batches_per_epoch=32
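These parameters match the field names of `trl`'s `DDPOConfig`; a minimal sketch of how they could be expressed (an assumption about the training stack, since this card does not include the training script):
```python
from trl import DDPOConfig

# Values copied from the list above; everything else left at its default.
ddpo_config = DDPOConfig(
    num_epochs=1,
    train_gradient_accumulation_steps=4,
    sample_num_steps=30,
    sample_batch_size=4,
    train_batch_size=4,
    sample_num_batches_per_epoch=32,
)
```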
|
wolferobert3/mistral_factcheck_four_bit_v2
|
wolferobert3
| 2024-01-20T20:07:36Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-01-20T20:07:21Z
|
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Coooori/llama_checkpoint-1800
|
Coooori
| 2024-01-20T19:53:28Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T19:53:26Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
Zeze24/ppo-LunarLander-v2
|
Zeze24
| 2024-01-20T19:50:41Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T18:36:01Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 241.94 +/- 27.71
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
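A rough sketch of what the completed usage code could look like (the checkpoint filename below is an assumption; check the repository's file listing):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; adjust it to the actual .zip file in the repo.
checkpoint = load_from_hub("Zeze24/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent on a fresh environment.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```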
|
PsySpy/ANFSSD
|
PsySpy
| 2024-01-20T19:49:03Z
| 0
| 1
| null |
[
"art",
"text-to-image",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-image
| 2024-01-20T18:59:55Z
|
---
license: cc-by-nc-4.0
pipeline_tag: text-to-image
tags:
- art
---
Model - And Now For Something Slightly Different

|
aserrasastre/Mistral-7B-def-Instruct-v0.1
|
aserrasastre
| 2024-01-20T19:40:22Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:aserrasastre/Mistral-7B-def",
"base_model:adapter:aserrasastre/Mistral-7B-def",
"region:us"
] | null | 2024-01-20T14:50:08Z
|
---
library_name: peft
base_model: aserrasastre/Mistral-7B-def
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B
|
charlesdedampierre
| 2024-01-20T19:38:05Z
| 14
| 6
|
Bunkatopics
|
[
"Bunkatopics",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:bunkalab/topic_based_chatml_dpo_pairs",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-12T13:15:10Z
|
---
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
datasets:
- bunkalab/topic_based_chatml_dpo_pairs
library_name: Bunkatopics
widget:
- text: Tell a danish joke in french
pipeline_tag: text-generation
---

## Model description
TopicNeuralHermes 2.5 Mistral 7B is a refined model developed by fine-tuning on a specific subset of data, selected via topic modeling techniques using [Bunkatopics](https://github.com/charlesdedampierre/BunkaTopics), continuing from [OpenHermes 2.5](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B).
The model was trained on a refined DPO dataset, with the objective of training on only a small portion of the DPO data. To achieve this, we compared the two datasets used to train the reward model: the rejected Llama answers and the accepted ChatGPT answers from the [DPO dataset](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs).
We then conducted topic modeling on both datasets, keeping only the topics that existed in the accepted dataset but not in the rejected one.
Our hypothesis is that these topics encapsulate the main differences between the two answering styles.
This method allows for quicker convergence with significantly less data (around 1/6 of the initial dataset). The dataset can be found at [bunkalab/topic_based_chatml_dpo_pairs](https://huggingface.co/datasets/bunkalab/topic_based_chatml_dpo_pairs).
Special thanks to [mlabonne](https://huggingface.co/mlabonne) for creating the [colab notebook](https://colab.research.google.com/drive/15iFBr1xWgztXvhrj5I9fBv20c7CFOPBE?usp=sharing#scrollTo=YpdkZsMNylvp) that facilitated the DPO Strategy.
Results of the model are shown below: we do as well as similar models with far less data and computing power :)

## Topic Analysis
We applied the topic modeling method to both datasets, extracting 30 topics from each.
These topics were characterized using the 10 most specific unigrams or bigrams.
We then compared the two sets of topics (30 from each dataset) and retained those in the accepted dataset that shared fewer than 2 terms with any topic in the rejected dataset.
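A small sketch of this filtering step in plain Python (not the original code; the topic term sets below are placeholders):
```python
# Each topic is represented by the set of its 10 most specific terms.
accepted_topics = {
    "Wildlife Ecology": {"birds", "prey", "animals", "species", "nest"},
    # ... 30 topics extracted from the accepted (ChatGPT) answers
}
rejected_topics = {
    "Generic Q&A": {"question", "answer", "information"},
    # ... 30 topics extracted from the rejected (Llama) answers
}

# Keep accepted topics that share fewer than 2 terms with every rejected topic.
distinctive_topics = {
    name: terms
    for name, terms in accepted_topics.items()
    if all(len(terms & other) < 2 for other in rejected_topics.values())
}
print(list(distinctive_topics))
```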
We found the 13 distinctive following topics described by 10 terms each:
**Emotional Dynamics**: feelings, Quinn, Austin, minority women, teaching, schools, individual, personality, backgrounds, triggers.
**Global Knowledge Queries**: question, information, geography, news articles, Step, answer, capital city, pipeline system, country, analogy.
**Digital Interactions and Queries**: questions, question, PersonX, modem, answers, effect relationship, Quora, browser, answer, e-commerce.
**Business and Cybersecurity**: email, businesses, initiatives, innovation, advertising papers, spam, breaches, antivirus, payments, prospects.
**Lifestyle and Wellness**: sleep, exercise, gifts, shopping, Casey, stores, stress, headaches, options, mood.
**Wildlife Ecology**: birds, prey, animals, species, infection, nest, eggs, bacteria, insects, kitty condo.
**Environmental Science and Climate**: temperature, gases, greenhouse, emissions, perturbation, sulfur, dioxide, climate change, water, heat.
**Maritime and Mechanical Engineering**: ship, bowling, propulsion, beam width, Filing cabinet, LED, lane, containment area, lawnmower, rotors.
**Cultural and Social Dynamics**: Lindsey, museum, Kate, Rachel, Jason, Alex, Erin, conversation, Laura, exhibits.
**Political Media Analysis**: media platforms, election, politics, teenagers, elections, White House, Barack Obama, nation, Confederate, depression.
**International Relations and Policy**: cooperation, EU, nations, alliance, NATO, European Union, member states, policy, monarch, Brexit.
**Astrophysics and Physical Sciences**: electrons, km, Moon, acceleration, orbit, friction, current, asteroid, electron, collector emitter.
**Film Critique and Analysis**: movie review, film, reviewer, sentiment, critic, flaws, DVD, plot, opinion, originality.
While these topics are not domain-specific, they did not appear in the rejected dataset. Further research is needed to understand why these topics are prominent in the accepted dataset.
## Usage
You can run this model using LM Studio or any other frontend.
You can also run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer
# Format prompt
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is Topic Modeling?"}
]
tokenizer = AutoTokenizer.from_pretrained('charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B')
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)
# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model='charlesdedampierre/TopicNeuralHermes-2.5-Mistral-7B',
    tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
## Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
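For reference, the LoRA and optimizer settings above can be written as `peft` / `transformers` configuration objects (a sketch only; `output_dir` is a placeholder and the model/dataset wiring is omitted):
```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'],
)

training_args = TrainingArguments(
    output_dir="topic-neural-hermes",  # placeholder
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

# These objects would then be passed to trl's DPOTrainer together with
# beta=0.1, max_prompt_length=1024 and max_length=1536.
```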
You can find the results of the running on Weights & Biases: https://wandb.ai/bunka/huggingface/runs/xq59p47g?workspace=user-charlesdedampierre
## Model Family Tree

|
yvblake/yvblake
|
yvblake
| 2024-01-20T19:35:29Z
| 1
| 1
|
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-20T14:20:18Z
|
---
license: creativeml-openrail-m
pipeline_tag: text-to-image
---
|
learn3r/longt5_xl_sfd_bp_10
|
learn3r
| 2024-01-20T19:34:36Z
| 4
| 0
|
transformers
|
[
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:learn3r/summ_screen_fd_bp",
"base_model:google/long-t5-tglobal-xl",
"base_model:finetune:google/long-t5-tglobal-xl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-19T23:56:06Z
|
---
license: apache-2.0
base_model: google/long-t5-tglobal-xl
tags:
- generated_from_trainer
datasets:
- learn3r/summ_screen_fd_bp
model-index:
- name: longt5_xl_sfd_bp_10
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_sfd_bp_10
This model is a fine-tuned version of [google/long-t5-tglobal-xl](https://huggingface.co/google/long-t5-tglobal-xl) on the learn3r/summ_screen_fd_bp dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8921
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 20.0
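As an illustration, these hyperparameters could be expressed as `transformers` `Seq2SeqTrainingArguments` (a sketch, not the original training command; `output_dir` is a placeholder):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="longt5_xl_sfd_bp_10",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=32,  # 8 * 32 = 256 total train batch size
    lr_scheduler_type="constant",
    num_train_epochs=20.0,
)
```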
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3973 | 0.97 | 14 | 1.9027 |
| 1.9188 | 1.95 | 28 | 1.6941 |
| 1.4297 | 2.99 | 43 | 1.5011 |
| 1.2759 | 3.97 | 57 | 1.5048 |
| 1.1421 | 4.94 | 71 | 1.5463 |
| 0.9605 | 5.98 | 86 | 1.6270 |
| 0.8082 | 6.96 | 100 | 1.7646 |
| 0.664 | 8.0 | 115 | 1.7878 |
| 0.5471 | 8.97 | 129 | 1.9500 |
| 0.4349 | 9.95 | 143 | 1.9657 |
| 0.4338 | 10.99 | 158 | 2.1351 |
| 0.2887 | 11.97 | 172 | 2.1166 |
| 0.2753 | 12.94 | 186 | 2.4357 |
| 0.2114 | 13.98 | 201 | 2.5789 |
| 0.1805 | 14.96 | 215 | 2.6075 |
| 0.1543 | 16.0 | 230 | 2.5597 |
| 0.5166 | 16.97 | 244 | 2.5067 |
| 0.1117 | 17.95 | 258 | 2.8087 |
| 0.0895 | 18.99 | 273 | 2.7578 |
| 0.0779 | 19.48 | 280 | 2.8921 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Coooori/llama_checkpoint-1700
|
Coooori
| 2024-01-20T19:32:34Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T19:32:32Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
anyiwang/mistral_1
|
anyiwang
| 2024-01-20T19:12:55Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-20T19:12:22Z
|
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral_1
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_1
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5
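For reference, the settings above map onto `transformers` `TrainingArguments` roughly as follows (a sketch under the assumption that `trl`'s `SFTTrainer` was used, as the tags suggest; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral_1",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.1,
    max_steps=5,
)
# With trl's SFTTrainer these arguments would be combined with a peft LoraConfig,
# matching the trl / sft / peft tags on this model.
```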
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Coooori/llama_checkpoint-1600
|
Coooori
| 2024-01-20T19:11:44Z
| 1
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T19:11:41Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
LHC88/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ
|
LHC88
| 2024-01-20T19:08:25Z
| 4
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mistral",
"finetune",
"dpo",
"Instruct",
"augmentation",
"german",
"conversational",
"en",
"de",
"fr",
"it",
"es",
"dataset:argilla/distilabel-math-preference-dpo",
"base_model:VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct",
"base_model:quantized:VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-01-19T22:46:43Z
|
---
base_model: VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct
datasets:
- argilla/distilabel-math-preference-dpo
inference: false
language:
- en
- de
- fr
- it
- es
library_name: transformers
license: apache-2.0
model_creator: VAGO solutions
model_name: SauerkrautLM Mixtral 8X7B Instruct
model_type: mixtral
pipeline_tag: text-generation
prompt_template: '[INST] {prompt} [/INST]
'
quantized_by: LHC
tags:
- mistral
- finetune
- dpo
- Instruct
- augmentation
- german
- mixtral
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.linkedin.com/in/lucas-h%C3%A4nke-de-cansino-8b8521234/">Chat & support: LHC's LinkedIn</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://github.com/sponsors/l4b4r4b4b4">Want to contribute? LHC's Github Sponsors</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SauerkrautLM Mixtral 8X7B Instruct - AWQ
- Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions)
- Original model: [SauerkrautLM Mixtral 8X7B Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct)
<!-- description start -->
## Description
This repo contains AWQ model files for [VAGO solutions's SauerkrautLM Mixtral 8X7B Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct).
**MIXTRAL AWQ**
This is a Mixtral AWQ model, with a slightly better 4-bit quantisation using a group size of 32, compared to TheBloke's AWQ quant with group size 128.
For AutoAWQ inference, please install AutoAWQ 0.1.8 or later.
Support via Transformers is also available, but currently requires installing Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers.git`
vLLM: version 0.2.6 is confirmed to support Mixtral AWQs.
TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!)
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
AWQ models are supported by (note that not all of these may support Mixtral models yet - see above):
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Mistral
```
[INST] {prompt} [/INST]
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release GEMM models only; this repo uses group_size 32 (see the table below). The addition of GEMV kernel models is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/LHC88/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ/tree/main) | 4 | 32 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 8192 | 24.65 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `LHC88/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-Mixtral-8x7B-Instruct-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model LHC88/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template='''[INST] {prompt} [/INST]
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="LHC88/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id LHC88/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''[INST] {prompt} [/INST]
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "LHC88/SauerkrautLM-Mixtral-8x7B-Instruct-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''[INST] {prompt} [/INST]
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
<!-- footer end -->
# Original model card: VAGO solutions's SauerkrautLM Mixtral 8X7B Instruct

## VAGO solutions SauerkrautLM-Mixtral-8x7B-Instruct
Introducing **SauerkrautLM-Mixtral-8x7B-Instruct** – our Sauerkraut version of the powerful Mixtral-8x7B-Instruct!
Aligned with **DPO**
# Table of Contents
1. [Overview of all SauerkrautLM-Mixtral models](#all-sauerkrautlm-mixtral-models)
2. [Model Details](#model-details)
   - [Prompt template](#prompt-template)
   - [Training Dataset](#training-dataset)
   - [Data Contamination Test](#data-contamination-test-results)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Mixtral Models
| Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Mixtral-8x7B-Instruct | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct) | coming soon | coming soon | coming soon |
| SauerkrautLM-Mixtral-8x7B | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-Mixtral-8x7B-Instruct**
- **Model Type:** SauerkrautLM-Mixtral-8x7B-Instruct-v0.1 is a Mixture of Experts (MoE) Model based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Language(s):** English, German, French, Italian, Spanish
- **License:** APACHE 2.0
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected])
### Training Dataset:
SauerkrautLM-Mixtral-8x7B-Instruct was trained with a mix of German data augmentation and translated data.
It was aligned through **DPO** with our **new German SauerkrautLM-DPO dataset**, which uses parts of the SFT SauerkrautLM dataset
as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) as rejected answers, supplemented with additional **translated parts of [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** (our dataset does not contain any TruthfulQA prompts - check the Data Contamination Test Results) and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo)**.
We found that a simple translation of training data can lead to unnatural German phrasings.
Data augmentation techniques were used to ensure grammatical and syntactical correctness and a more natural German wording in our training data.
### Data Contamination Test Results
Some models on the HuggingFace leaderboard had problems with wrong data getting mixed in.
We checked our SauerkrautLM-DPO dataset with a special test [1] on a smaller model for this problem.
The HuggingFace team used the same methods [2, 3].
Our results, with `result < 0.1, %:` being well below 0.9, indicate that our dataset is free from contamination.
*The data contamination test results of HellaSwag and Winograde will be added once [1] supports them.*
| Dataset | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SauerkrautLM-DPO**| result < 0.1, %: 0.0 |result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 |
[1] https://github.com/swj0419/detect-pretrain-code-contamination
[2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06
[3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230
### Prompt Template:
```
[INST] Instruction [/INST] Model answer [INST] Follow-up instruction [/INST]
```
## Evaluation

*evaluated with lm-evaluation-harness v0.3.0 - mmlu coming soon
*All benchmarks were performed with a sliding window of 4096. New Benchmarks with Sliding Window null coming soon
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. These models may be employed for commercial purposes, and the Apache 2.0 license remains applicable and is included with the model files.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
## Acknowledgement
Many thanks to [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to MistralAI for providing the open source community with their latest technology!
|
LHC88/SauerkrautLM-Mixtral-8x7B-AWQ
|
LHC88
| 2024-01-20T19:04:18Z
| 7
| 1
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"mistral",
"finetune",
"sft",
"dpo",
"chatml",
"augmentation",
"german",
"en",
"de",
"fr",
"it",
"es",
"dataset:Open-Orca/SlimOrca",
"dataset:argilla/distilabel-math-preference-dpo",
"base_model:VAGOsolutions/SauerkrautLM-Mixtral-8x7B",
"base_model:quantized:VAGOsolutions/SauerkrautLM-Mixtral-8x7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-01-20T15:06:08Z
|
---
base_model: VAGOsolutions/SauerkrautLM-Mixtral-8x7B
datasets:
- Open-Orca/SlimOrca
- argilla/distilabel-math-preference-dpo
inference: false
language:
- en
- de
- fr
- it
- es
library_name: transformers
license: apache-2.0
model_creator: VAGO solutions
model_name: SauerkrautLM Mixtral 8X7B
model_type: mixtral
pipeline_tag: text-generation
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: LHC88
tags:
- mistral
- finetune
- sft
- dpo
- chatml
- augmentation
- german
- mixtral
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.linkedin.com/in/lucas-h%C3%A4nke-de-cansino-8b8521234/">Chat & support: LHC's LinkedIn</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://github.com/sponsors/l4b4r4b4b4">Want to contribute? LHC's Github Sponsors</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# SauerkrautLM Mixtral 8X7B - AWQ
- Model creator: [VAGO solutions](https://huggingface.co/VAGOsolutions)
- Original model: [SauerkrautLM Mixtral 8X7B](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B)
<!-- description start -->
## Description
This repo contains AWQ model files for [VAGO solutions's SauerkrautLM Mixtral 8X7B](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B).
**MIXTRAL AWQ**
This is a Mixtral AWQ model.
For AutoAWQ inference, please install AutoAWQ 0.1.8 or later.
Support via Transformers is coming soon, via this PR: https://github.com/huggingface/transformers/pull/27950 which should be merged to Transformers `main` very soon.
vLLM: version 0.2.6 is confirmed to support Mixtral AWQs.
TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!)
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
AWQ models are supported by (note that not all of these may support Mixtral models yet - see above):
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/LHC88/SauerkrautLM-Mixtral-8x7B-AWQ/tree/main) | 4 | 128 | [German Quad](https://huggingface.co/datasets/deepset/germanquad/viewer/) | 8192 | 24.65 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `LHC88/SauerkrautLM-Mixtral-8x7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `SauerkrautLM-Mixtral-8x7B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model LHC88/SauerkrautLM-Mixtral-8x7B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
system_message = "You are a helpful assistant."  # example value; not specified in the original card
prompt_template='''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="LHC88/SauerkrautLM-Mixtral-8x7B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id LHC88/SauerkrautLM-Mixtral-8x7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example value; not specified in the original card
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "LHC88/SauerkrautLM-Mixtral-8x7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # example value; not specified in the original card
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
<!-- footer end -->
# Original model card: VAGO solutions's SauerkrautLM Mixtral 8X7B

## VAGO solutions SauerkrautLM-Mixtral-8x7B
Introducing **SauerkrautLM-Mixtral-8x7B** – our Sauerkraut version of the powerful Mixtral-8x7B!
Finetuned and aligned with **SFT** and **DPO**
# Table of Contents
1. [Overview of all SauerkrautLM-Mixtral models](#all-sauerkrautlm-mixtral-models)
2. [Model Details](#model-details)
   - [Prompt template](#prompt-template)
   - [Training Dataset](#training-dataset)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Mixtral Models
| Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Mixtral-8x7B | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B) | coming soon | coming soon | coming soon |
| SauerkrautLM-Mixtral-8x7B-Instruct | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-Mixtral-8x7B**
- **Model Type:** SauerkrautLM-Mixtral-8x7B is a Mixture of Experts (MoE) Model based on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
- **Language(s):** English, German, French, Italian, Spanish
- **License:** APACHE 2.0
- **Contact:** [Website](https://vago-solutions.de/#Kontakt) [David Golchinfar](mailto:[email protected])
### Training Dataset:
SauerkrautLM-Mixtral-8x7B was trained with a mix of German data augmentation and translated data.
**SFT** was performed with the [OpenOrca/Slim-Orca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset, followed by **DPO** alignment with our **new German SauerkrautLM-DPO dataset**, which uses parts of the SFT SauerkrautLM dataset as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) as rejected answers, supplemented with additional **translated parts of [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)** and **[argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo)**.
We found that a simple translation of training data can lead to unnatural German phrasing, so data augmentation techniques were used to ensure grammatical and syntactical correctness and more natural German wording in our training data.
### Data Contamination Test Results
Some models on the Hugging Face leaderboard have had problems with contaminated data being mixed into their training sets.
We checked our SauerkrautLM-DPO dataset for this problem with a dedicated test [1] on a smaller model.
The Hugging Face team used the same methods [2, 3].
Our results, with `result < 0.1, %:` values well below 0.9, indicate that our dataset is free from contamination.
*The data contamination test results for HellaSwag and Winogrande will be added once [1] supports them.*
| Dataset | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SauerkrautLM-DPO**| result < 0.1, %: 0.0 |result < 0.1, %: 0.09 | result < 0.1, %: 0.13 | result < 0.1, %: 0.16 |
[1] https://github.com/swj0419/detect-pretrain-code-contamination
[2] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06
[3] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230
### Prompt Template:
```
<|im_start|>system
Du bist ein großes Sprachmodell, das höflich und kompetent antwortet. Schreibe deine Gedanken Schritt für Schritt auf, um Probleme sinnvoll zu lösen.<|im_end|>
<|im_start|>user
Wie geht es dir?<|im_end|>
<|im_start|>assistant
```
## Evaluation

*Evaluated with lm-evaluation-harness v0.3.0; MMLU results coming soon.
*All benchmarks were performed with a sliding window of 4096. New benchmarks with the sliding window set to null are coming soon.
## Disclaimer
We must inform users that, despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not responsible for the actions of third parties who use our models. These models may be employed for commercial purposes, and the Apache 2.0 license remains applicable and is included with the model files.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website or contact us at [Dr. Daryoush Vaziri](mailto:[email protected]). We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us.
## Acknowledgement
Many thanks to [OpenOrca](https://huggingface.co/Open-Orca), [argilla](https://huggingface.co/datasets/argilla) and [Huggingface](https://huggingface.co) for providing such valuable datasets to the Open-Source community. And of course a big thanks to MistralAI for providing the open source community with their latest technology!
|
ayeshgk/codet5-small-ft-v3
|
ayeshgk
| 2024-01-20T19:01:08Z
| 5
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-20T19:00:11Z
|
---
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-small-ft-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-small-ft-v3
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3637
- Rouge1: 43.6813
- Rouge2: 23.5511
- Rougel: 41.3344
- Rougelsum: 41.3133
- Gen Len: 15.4815
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
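For reference, a hedged sketch of `Seq2SeqTrainingArguments` mirroring the settings listed above; the output directory and the `predict_with_generate` flag are assumptions, not taken from the original training script.
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="codet5-small-ft-v3",   # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                  # "Native AMP" mixed precision
    predict_with_generate=True, # needed to compute ROUGE on generated summaries
)
```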
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 5 | 4.0679 | 33.211 | 16.5431 | 32.2171 | 32.5082 | 17.1852 |
| No log | 2.0 | 10 | 2.9674 | 35.4549 | 17.5051 | 34.2717 | 34.5858 | 16.7407 |
| No log | 3.0 | 15 | 2.4874 | 40.3342 | 21.4844 | 39.1385 | 39.3387 | 16.4074 |
| No log | 4.0 | 20 | 2.3637 | 43.6813 | 23.5511 | 41.3344 | 41.3133 | 15.4815 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
atigilef/text_classification
|
atigilef
| 2024-01-20T19:00:45Z
| 44
| 0
|
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-17T16:45:32Z
|
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: atigilef/text_classification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# atigilef/text_classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0958
- Validation Loss: 0.1441
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
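The optimizer configuration above (AdamWeightDecay with a linear `PolynomialDecay` schedule over 3000 steps) can be rebuilt with the `transformers` helper. A hedged sketch follows; the warmup step count is an assumption since none is listed.
```python
import tensorflow as tf
from transformers import create_optimizer

# Mixed-precision policy matching training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=3000,   # decay_steps from the config above
    num_warmup_steps=0,     # assumption: no warmup listed
    weight_decay_rate=0.01,
)
```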
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4673 | 0.1937 | 0 |
| 0.1367 | 0.1520 | 1 |
| 0.0958 | 0.1441 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
clecam/bert-fine-tune-cola
|
clecam
| 2024-01-20T18:59:52Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T18:36:14Z
|
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tune-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tune-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8426
- Matthews Correlation: 0.5701
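The Matthews correlation reported above can be computed with the `evaluate` library; this is a minimal sketch with placeholder predictions rather than real CoLA outputs.
```python
import evaluate

# Substitute real model predictions and CoLA labels for these placeholders.
matthews = evaluate.load("matthews_correlation")
result = matthews.compute(predictions=[1, 0, 1, 1], references=[1, 0, 0, 1])
print(result)  # {'matthews_correlation': ...}
```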
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4635 | 1.0 | 1069 | 0.4707 | 0.4994 |
| 0.3175 | 2.0 | 2138 | 0.5371 | 0.5657 |
| 0.1944 | 3.0 | 3207 | 0.8426 | 0.5701 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
anyiwang/data_experiment
|
anyiwang
| 2024-01-20T18:51:00Z
| 11
| 0
|
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-14T23:39:05Z
|
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: data_experiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data_experiment
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Coooori/llama_checkpoint-1500
|
Coooori
| 2024-01-20T18:50:52Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T18:50:50Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
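The settings listed above correspond roughly to the following `BitsAndBytesConfig`; this is a sketch assuming a recent `transformers`/`bitsandbytes` install, not the exact object used during training.
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```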
### Framework versions
- PEFT 0.4.0
|
haolx/QLearning-Taxi-v3
|
haolx
| 2024-01-20T18:33:44Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T18:33:42Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: QLearning-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="haolx/QLearning-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
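`load_from_hub` is not defined in the snippet above. A minimal sketch of what it can look like, assuming the repository stores the agent as a pickled dictionary (Q-table, `env_id`, ...) as in the Hugging Face Deep RL course, and that `gymnasium` is imported as `gym`:
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dictionary from the Hub and load it.
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```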
|
jarekr/bert-finetuned-squad
|
jarekr
| 2024-01-20T18:32:53Z
| 13
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-20T16:14:51Z
|
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LeNaM/bert_twitter_sentiment_analysis_en
|
LeNaM
| 2024-01-20T18:29:37Z
| 42
| 0
|
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T18:25:49Z
|
---
license: cc-by-nc-sa-4.0
---
|
RedRawMeat/lilt-en-funsd
|
RedRawMeat
| 2024-01-20T18:23:33Z
| 8
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"lilt",
"token-classification",
"generated_from_trainer",
"base_model:SCUT-DLVCLab/lilt-roberta-en-base",
"base_model:finetune:SCUT-DLVCLab/lilt-roberta-en-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-20T14:34:06Z
|
---
license: mit
base_model: SCUT-DLVCLab/lilt-roberta-en-base
tags:
- generated_from_trainer
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9402
- Answer: {'precision': 0.4992163009404389, 'recall': 0.7796817625458996, 'f1': 0.608695652173913, 'number': 817}
- Header: {'precision': 0.03125, 'recall': 0.008403361344537815, 'f1': 0.013245033112582781, 'number': 119}
- Question: {'precision': 0.625, 'recall': 0.7753017641597029, 'f1': 0.692084542063821, 'number': 1077}
- Overall Precision: 0.5571
- Overall Recall: 0.7317
- Overall F1: 0.6326
- Overall Accuracy: 0.6359
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 25
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Abhinav28/large-v3-hi-commonvoice-11-peft-trained-adapter
|
Abhinav28
| 2024-01-20T18:14:02Z
| 0
| 0
|
peft
|
[
"peft",
"tensorboard",
"safetensors",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"license:apache-2.0",
"region:us"
] | null | 2024-01-19T18:24:36Z
|
---
language:
- hi
license: apache-2.0
library_name: peft
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
base_model: openai/whisper-large-v3
model-index:
- name: Abhinav28/large-v3-hi-common-voice-11-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Abhinav28/large-v3-hi-common-voice-11-full
This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1558
## Model description
Trained without fp16=True
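Since this repository only contains a PEFT adapter rather than full model weights, the base Whisper checkpoint has to be loaded first. A minimal loading sketch, assuming the `peft`/`transformers` versions listed under Framework versions below:
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(base, "Abhinav28/large-v3-hi-commonvoice-11-peft-trained-adapter")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
```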
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1629 | 1.0 | 818 | 0.1558 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
itdenismaslyuk/openchat-lora
|
itdenismaslyuk
| 2024-01-20T18:06:41Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T18:06:33Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BadBoy17G/asr-new
|
BadBoy17G
| 2024-01-20T18:01:45Z
| 4
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ta",
"dataset:asr-new",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-20T16:59:26Z
|
---
language:
- ta
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- asr-new
model-index:
- name: asr-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# asr-new
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the BadBoy17G/asr-new dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
TuringsSolutions/PhiGlobalFineTunedAgent
|
TuringsSolutions
| 2024-01-20T17:59:56Z
| 10
| 0
|
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"dataset:TuringsSolutions/GlobalFunctionCallingTrainingSetLarge",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T17:47:09Z
|
---
license: mit
datasets:
- TuringsSolutions/GlobalFunctionCallingTrainingSetLarge
---
Trained for 45 minutes (3 epochs on a single T4 GPU)
Sample Inputs and Outputs
Instruction: "Book me a flight from New York to Los Angeles next Tuesday for a business trip."
### Rationale 1: Search for flights from New York to Los Angeles on the specified date.
### Action 1: invoke_flight_search
### Observation 1: {"flights": [{"departure_city": "New York", "arrival_city": "Los Angeles", "departure_date": "2022-02-10", "arrival_date": "2022-02-12", "price": "$500"}, {"departure_city": "New York", "arrival_city": "Los Angeles", "departure_date": "2022-02-11", "arrival_date": "2022-02-13", "price": "$550"}]}
### Rationale 2: Two flight options found, let's compare them.
### Action 2: Display flight options
### Observation
My computer is running slowly. Can you help me fix it?
It's affecting my work."
### Rationale 1: Assess the computer's performance and identify potential issues.
### Action 1: invoke_computer_diagnostics
### Observation 1: {"diagnostics": ["Overheating", "Memory Leaks"]}
### Rationale 2: Overheating and memory leaks detected.
### Action 2: suggest_cooling_tips
### Observation 2: {"tips": ["Clean the cooling fans", "Update the operating system"]}
### Answer: "To improve your computer's performance, I recommend cleaning the cooling fans and updating the operating system. Let me know if you need further assistance."
### Observation 3: User follows the tips
### Rationale 3: User's computer performance improves
### Action 3: confirm_improvement
### Observation 3: {"improvement_confirmed": true}
### Answer: "I'm
"What's the traffic like on my way to work?"
We need to plan our commute accordingly," suggested Sarah.
### Rationale 1: Checking the traffic conditions to plan the commute.
### Action 1: invoke_traffic_api
### Observation 1: {"traffic_status": "Heavy", "congestion_level": "High"}
### Rationale 2: Heavy traffic, congestion level is high.
### Action 2: Suggest alternative routes
### Observation 2: {"alternative_routes": ["Take the expressway", "Use the back roads"]}
### Answer: I suggest taking the expressway or using the back roads to avoid heavy traffic. Would you like me to provide directions?
### Observation 3: User selects "Use the back roads"
### Answer: Great choice! The back roads will help you bypass the heavy traffic. Let me know if you need further assistance.
### Rationale 3: User selects
|
Havmand/minillama
|
Havmand
| 2024-01-20T17:49:29Z
| 8
| 2
| null |
[
"gguf",
"llama",
"test",
"development",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T17:03:44Z
|
---
inference: true
language:
- en
license: mit
model_creator: Mads Havmand
model_name: minillama
model_type: llama
quantized_by: Havmand
tags:
- llama
- test
- development
---
# minillama
- Model creator: [Mads Havmand](https://huggingface.co/Havmand)
## Description
minillama is a minimal Large Language Model using the Llama architecture and distributed in the GGUF format.
The purpose of the model is to be small and technically qualify as a model that can be loaded with llama.cpp without causing an error.
I originally created this model because I needed a small model for my unit tests of Python code that used llama-cpp-python.
The model __can technically__ be used for inference, but the output produced is as close to useless as you can get.
Throughput is nice though, at around 1000 tokens per second on an Apple M2 Pro.
To reduce file size, the model is quantized using Q2_K.
The model contains 4.26 million parameters and is 3.26 MiB.
As for the vocabulary, the model uses the llama vocabulary provided by [llama.cpp](https://github.com/ggerganov/llama.cpp/blob/97c1549808d2742d37584a3c9df28154bdf34417/models/ggml-vocab-llama.gguf) (SHA512: `38a5acf305050422882044df0acc97e5ae992ed19b2838b3b58ebbbb1f61c59bfc12a6f686a724aed32227045806e4dd46aadf9822155d1169455fa56d38fbc2`)
The training corpus consists of a space and a newline:
```hexdump
00000000 20 0a | .|
00000002
```
Finally, the model was built using llama.cpp's `train-text-from-scratch` (from commit [97c1549808d2742d37584a3c9df28154bdf34417](https://github.com/ggerganov/llama.cpp/tree/97c1549808d2742d37584a3c9df28154bdf34417)). The command used was:
```sh
./train-text-from-scratch \
--vocab-model models/ggml-vocab-llama.gguf \
--ctx 1 --embd 64 --head 1 --layer 1 \
--checkpoint-in chk-minillama-LATEST.gguf \
--checkpoint-out chk-minillama-ITERATION.gguf \
--model-out ggml-minillama-f32-ITERATION.gguf \
--train-data "training.txt" \
-t 6 -b 16 --seed 1 --adam-iter 1 \
--no-checkpointing
```
Quantization happened using `./quantize ggml-minillama-f32-LATEST.gguf 10`.
These files were quantized using hardware kindly provided by me.
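Since the model exists mainly to keep llama.cpp-based test suites fast, here is a minimal `llama-cpp-python` sketch; the local GGUF filename is an assumption, so use whichever file you downloaded from this repository.
```python
from llama_cpp import Llama

# Tiny model: loading and generation are near-instant, and the output is expected to be useless.
llm = Llama(model_path="ggml-minillama-q2_k.gguf")  # hypothetical local filename
out = llm("Hello", max_tokens=8)
print(out["choices"][0]["text"])
```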
|
Coooori/llama_checkpoint-1200
|
Coooori
| 2024-01-20T17:48:17Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T17:48:15Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
nicce/story-mixtral-8x7b-lora
|
nicce
| 2024-01-20T17:46:43Z
| 1
| 0
|
peft
|
[
"peft",
"safetensors",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-19T17:27:52Z
|
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for nicce/sexbot
This is a LoRA adapter for mistralai/Mixtral-8x7B-Instruct-v0.1
## Testing the Model
To quickly test the model, you can run it on a GPU with the transformers / peft library:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("nicce/sexbot")
model = AutoPeftModelForCausalLM.from_pretrained("nicce/sexbot").to("cuda") # if you get a CUDA out of memory error, try load_in_8bit=True
messages = [
{"role": "system", "content": "You are a helpful assistant"},
{"role": "user", "content": "Hi, can you please explain machine learning to me?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
generated_ids = model.generate(input_ids=encodeds, min_new_tokens=10, max_new_tokens=300, do_sample=True, temperature=0.9, top_p=0.8)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
|
yee-nah/Trinity-13B-GGUF
|
yee-nah
| 2024-01-20T17:43:48Z
| 1
| 1
| null |
[
"gguf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T16:20:34Z
|
---
license: llama2
---
# Trinity
# trinity-13b.Q4_K_M.gguf
# trinity-13b.Q5_K_M.gguf
Quantized with https://colab.research.google.com/drive/1P646NEg33BZy4BfLDNpTz0V0lwIU3CHu

Trinity is a coding-specific model series that can be used to create autonomous agents. In the future, we will be releasing agent software that uses this model.
# Our Offensive Cybersecurity Model WhiteRabbitNeo-33B is now in beta!
Check out the Prompt Enhancing feature! Access at: https://www.whiterabbitneo.com/
# Join Our Discord Server
Join us at: https://discord.gg/8Ynkrcbk92 (Updated on Dec 29th. Now permanent link to join)
# Sample Inference Code
```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/home/migel/models/WhiteRabbitNeo"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=False,
    load_in_8bit=True,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.5,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"

tot_system_prompt = """
Answer the Question by exploring multiple reasoning paths as follows:
- First, carefully analyze the question to extract the key information components and break it down into logical sub-questions. This helps set up the framework for reasoning. The goal is to construct an internal search tree.
- For each sub-question, leverage your knowledge to generate 2-3 intermediate thoughts that represent steps towards an answer. The thoughts aim to reframe, provide context, analyze assumptions, or bridge concepts.
- Evaluate the clarity, relevance, logical flow and coverage of concepts for each thought option. Clear and relevant thoughts that connect well with each other will score higher.
- Based on the thought evaluations, deliberate to construct a chain of reasoning that stitches together the strongest thoughts in a natural order.
- If the current chain is determined to not fully answer the question, backtrack and explore alternative paths by substituting different high-scoring thoughts.
- Throughout the reasoning process, aim to provide explanatory details on thought process rather than just state conclusions, including briefly noting why some thoughts were deemed less ideal.
- Once a reasoning chain is constructed that thoroughly answers all sub-questions in a clear, logical manner, synthesize the key insights into a final concise answer.
- Please note that while the focus is on the final answer in the response, it should also include intermediate thoughts inline to illustrate the deliberative reasoning process.
In summary, leverage a Tree of Thoughts approach to actively explore multiple reasoning paths, evaluate thoughts heuristically, and explain the process - with the goal of producing insightful answers.
"""

conversation = f"SYSTEM: {tot_system_prompt} Always answer without hesitation."

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    # print(conversation)

    json_data = {"prompt": user_input, "answer": answer}
    # print(json_data)
    # with open(output_file_path, "a") as output_file:
    #     output_file.write(json.dumps(json_data) + "\n")
```
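The sample code above targets a full-precision `transformers` checkpoint. For the GGUF quants listed at the top of this card, a hedged `llama-cpp-python` sketch follows; the generation settings and the USER/ASSISTANT prompt framing mirror the sample code but are otherwise assumptions.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the GGUF files named above and load it with llama.cpp bindings.
gguf_path = hf_hub_download("yee-nah/Trinity-13B-GGUF", "trinity-13b.Q4_K_M.gguf")
llm = Llama(model_path=gguf_path, n_ctx=4096)

out = llm("USER: Write a Python function that reverses a string.\nASSISTANT: ",
          max_tokens=256, temperature=0.5)
print(out["choices"][0]["text"])
```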
# Sample Conversation:
You: Code me a Fast API server, with one endpoint. This endpoint returns files from a S3 bucket.
Here is how to code a Fast API server with one endpoint that returns files from an S3 bucket:
```python
from fastapi import FastAPI
from starlette.responses import StreamingResponse
import boto3
import botocore.exceptions

app = FastAPI()
s3_client = boto3.client('s3')

@app.get("/s3files")
async def get_s3_files(bucket_name: str, key: str):
    try:
        obj = s3_client.get_object(Bucket=bucket_name, Key=key)
        return StreamingResponse(obj['Body'], media_type=obj['ContentType'])
    except botocore.exceptions.ClientError as e:
        return {"error": str(e)}, 404
```
To explain:
- We initialize a FastAPI app and an S3 client
- The `/s3files` endpoint takes the bucket name and object key as parameters
- We try to get the object from S3 using the S3 client
- If successful, we return the object's Body wrapped in a StreamingResponse, setting the ContentType
- If the client raises an error, we return a 404 with the error message
This allows clients to request files from an S3 bucket by specifying the bucket name and object key. The endpoint streams the file content back to the client.
To elaborate further:
- The endpoint can be extended to validate inputs, handle authentication, logging, etc.
- We could make this endpoint more generic by accepting a list of object keys instead of requiring a single key.
- There are several ways to optimize performance, like caching responses.
Let me know if you have any other questions!
You: How do I authenticate s3?
Here are a few ways to authenticate with S3:
1. Credentials - You can provide AWS access key ID and secret access key to boto3. This will sign requests automatically.
```python
import boto3
s3 = boto3.client('s3',
aws_access_key_id='YOUR_ACCESS_KEY',
aws_secret_access_key='YOUR_SECRET_KEY')
```
2. IAM roles - If you are running on EC2 with an IAM role, you don't need to provide credentials. Boto3 will automatically use the role.
3. Environment variables - Store credentials in environment variables and read them in boto3.
4. AWS credentials file - Store credentials in ~/.aws/credentials. Boto3 will automatically use them.
5. STS Assume roles - Get temporary credentials from STS and assume a role to access S3.
6. AWS SDK credential providers - Use environment variables, configuration files, etc to load credentials.
So in summary, the best way depends on your deployment environment. The best practice is to use IAM roles when possible. Other options like environment variables and credential files can be useful for testing.
Let me know if you have any other questions!
|
cmadon/codeparrot-ds
|
cmadon
| 2024-01-20T17:42:32Z
| 6
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T15:34:36Z
|
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CultriX/CultriX-MoE-BF16
|
CultriX
| 2024-01-20T17:33:47Z
| 55
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser",
"mlabonne/Marcoro14-7B-slerp",
"base_model:fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser",
"base_model:merge:fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser",
"base_model:mlabonne/Marcoro14-7B-slerp",
"base_model:merge:mlabonne/Marcoro14-7B-slerp",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:merge:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T17:23:56Z
|
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
- mlabonne/Marcoro14-7B-slerp
base_model:
- mlabonne/NeuralBeagle14-7B
- fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
- mlabonne/Marcoro14-7B-slerp
---
# CultriX-MoE-BF16
CultriX-MoE-BF16 is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser)
* [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp)
## 🧩 Configuration
```yaml
base_model: "EmbeddedLLM/Mistral-7B-Merge-14-v0.2"
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: "mlabonne/NeuralBeagle14-7B"
positive_prompts:
- "Create a story based on"
- "Debate the topic of"
- "Come up with some arguments"
- "Provide me with instructions on"
- "Interpret the sentiment"
- "Interpret and execute these cooking instructions"
- "Craft a persuasive argument"
- "Analyze the motivations"
- "Construct a detailed plan for"
- "Narrate an event from multiple perspectives."
- "Formulate a response"
- "Write a script for a short play"
- "Generate a sequence of instructions to teach a skill."
- "Solve this riddle"
- "Create an engaging story"
- "Write a fictional"
- "Propose a solution to a social issue"
- "Develop a dialogue"
- "Create a step-by-step guide"
- "Devise a strategy"
- "Write a narrative"
- "Tell me how to"
- "Explain the concept of"
- "Give an overview of"
- "Compare and contrast between"
- "Provide information about"
- "Help me understand"
- "Summarize"
- "Make a recommendation on"
- "Answer this question"
- "How do you approach"
- "Explain the concept of"
- "Give an overview of"
- "Provide information about"
- "Help me understand the principles of"
- "Summarize the key components of"
- "Make a recommendation on how to"
- "Answer this question:"
negative_prompts:
- "Provide in-depth information about quantum computing."
- "Explain the inner workings of an internal combustion engine."
- "Give a detailed tutorial on advanced calculus."
- "Summarize the latest research in genetic engineering."
- "Interpret financial markets and stock trends."
- "Analyze the chemical composition of"
- "Develop a blueprint for."
- "Offer a critique of a modern art piece."
- "Provide a technical review of"
- "Conduct a linguistic analysis of an ancient language."
- "Write a user manual for advanced medical equipment."
- "Give a step-by-step guide on piloting an aircraft."
- "Conduct an in-depth analysis of this code"
- "Explain the physics behind black holes."
- "Provide a strategy for managing a cyber attack"
- "Develop an algorithm for predictive analytics in finance."
- "Provide information about advanced programming algorithms."
- "Help me understand the details of this code"
- "Summarize the process of cellular respiration."
- "Improve the security of"
- "What are the latest advancements in artificial intelligence?"
- "Provide detailed technical coding solutions."
- "Analyze complex scientific data and statistics."
- "Offer medical diagnoses based on symptoms."
- "Conduct a detailed financial audit of a company."
- "Perform real-time translation of multiple languages."
- "Create high-resolution graphic designs."
- "Develop complex mathematical proofs."
- "Offer legal advice on specific cases."
- "Write a detailed manual on advanced mechanical engineering."
- "Conduct an in-depth psychological assessment."
- "Perform a security analysis of a computer network."
- "Compose an original piece of music."
- "Plan and execute a scientific experiment."
- "Provide professional career counseling."
- "Develop a complex database management system."
- "Write a software program for data analysis."
- "Give expert advice on cyber"
- "Conduct a pentesting security audit"
- source_model: "fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser"
positive_prompts:
- "Provide step-by-step coding instructions for..."
- "Draft a function with detailed steps in [language]"
- "Guide me through coding a simple [type of application or script]"
- "Recommend best practices for code implementation in [context]"
- "Generate a regex pattern for extracting [specific data]"
- "Create a regex for matching [pattern]"
- "Explain the purpose of this regex pattern"
- "Compose regex for [specific use case]"
- "Annotate this code with detailed comments for each line"
- "Add explanatory comments to this script"
- "Comment on each part of this code for clarity"
- "Develop a script to [accomplish task]"
- "Design a database schema for [specific use case]"
- "Outline secure methods for [specific operation]"
- "Guide on optimizing [specific aspect] in this code"
- "Refactor this code for better readability and efficiency"
- "Compare and contrast these code snippets"
- "Identify the programming language of this snippet"
- "Demonstrate the usage of [specific tool/library/API]"
- "Show implementation steps for this [feature/concept]"
- "Teach how to use [specific tool/library/framework]"
- "Generate a README file for this project"
- "Create a manual page for [specific tool/command]"
- "Produce comprehensive documentation for this code"
- "Build detailed documentation for [specific module]"
- "Explain the underlying concept of this code snippet"
- "Propose enhancements for this script"
- "Suggest improvements for this API call integration"
- "Diagnose and solve this coding issue"
- "Demonstrate robust error handling in this code"
- "Debug and resolve issues in this script"
- "Design a user-friendly GUI for this script's functionality"
- "Detail the deployment process for this application"
- "Deploy an app designed to [perform function]"
- "Set up a web service for [specific purpose]"
- "Develop a website with [specific features]"
- "Craft a webpage showcasing [specific content]"
- "Illustrate data flow in this code architecture"
- "Convert this code from [language A] to [language B]"
- "Translate this script into [different programming language]"
- "Explain resource management techniques in [context]"
- "Build a basic API endpoint for [functionality]"
- "Strategies to enhance scalability in [context]"
- "Conduct a security review for this code"
- "Enhance security measures in [application/module]"
- "Set up a development environment for [language/framework]"
- "Visualize data from [specific dataset]"
- "Generate a dataset for [specific use case]"
- "Scripting guide for automating [task/process]"
- "Utilize this code for [specific purpose]"
- "Principles of object-oriented programming in [language]"
- "Create a mobile-responsive layout for this web app"
- "Explain the debugging process for this code"
- "Compose code to accomplish [task]"
- "Guidance on writing code for [specific purpose]"
- "I need a script for [specific function]"
- "Clarify the functionality of this code"
- "What is the purpose of this code segment?"
- "Enhance this code for [specific improvement]"
- "Develop a program that [solves problem]"
- "Code needed for [specific task]"
- "Program a solution for [problem statement]"
- "Enhance this function's performance by..."
- "Refactor code for better readability in [context]"
- "Craft a custom function for [specific requirement]"
- "Reduce computational complexity in this algorithm by..."
- "Extend the codebase to include [new feature]"
- "Incorporate this API into an existing application"
- "Assist in troubleshooting and bug fixing for [issue]"
- "Review and prep this code for deployment"
- "Analyze error logs for potential issues in [context]"
- "Create unit tests for [module/component]"
- "Evaluate methodologies for [problem-solving]"
- "Research [topic] online"
- "Utilize the [plugin/tool] to achieve [result]"
- "Design an efficient search algorithm for [data type]"
- "Create a web crawler for [specific data extraction]"
- "Application of web sockets in [real-time scenario]"
- "Guide to integrating a third-party library in [framework]"
- "Best practices in API design for [application type]"
negative_prompts:
- "Provide a detailed analysis of historical events."
- "Give medical advice for treating a specific illness."
- "Write a comprehensive review of a novel."
- "Explain legal implications of a contract."
- "Develop a marketing strategy for a new product."
- "Offer financial advice for stock investments."
- "Create a recipe for a gourmet dish."
- "Teach a foreign language lesson."
- "Compose a symphony or musical piece."
- "Provide workout plans and fitness coaching."
- "Conduct a psychological analysis of a character."
- "Write a script for a movie or play."
- "Design a blueprint for architectural structures."
- "Give a tutorial on how to paint a landscape."
- "Explain quantum physics theories."
- "Offer career counseling and resume writing tips."
- "Teach how to repair a car engine."
- "Plan a travel itinerary for a world tour."
- "Guide on how to grow organic vegetables."
- "Discuss political strategies for an election campaign."
- source_model: "mlabonne/Marcoro14-7B-slerp"
positive_prompts:
- "Generate a creative story based on these keywords."
- "Explain a complex topic in simple terms"
- "Provide a detailed summary of"
- "Answer this question with factual accuracy"
- "Explain the historical significance of"
- "Provide a truthful and detailed account of"
- "Develop a strategy for solving a practical problem."
- "Explain the reasoning behind"
- "Provide an analysis of a moral dilemma with possible solutions."
negative_prompts:
- "imathematical problem-solving."
- "scientific theory explanations."
- "high-level abstract reasoning tasks."
- "professional advice in specialized fields like law or medicine."
- "provide me with a coding solution for"
- "Academic research"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/CultriX-MoE-BF16"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
shomo/huinya
|
shomo
| 2024-01-20T17:23:25Z
| 0
| 0
| null |
[
"license:other",
"region:us"
] | null | 2024-01-20T17:23:25Z
|
---
license: other
license_name: huinya
license_link: LICENSE
---
|
tarekxpc/SA_XPC
|
tarekxpc
| 2024-01-20T17:21:37Z
| 0
| 0
|
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-20T17:05:13Z
|
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
model-index:
- name: SA_XPC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SA_XPC
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 232
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ayoubkirouane/Mistral-Depth-UP-Scaled-9B
|
ayoubkirouane
| 2024-01-20T17:16:49Z
| 20
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T16:09:42Z
|
---
license: apache-2.0
language:
- en
- fr
library_name: transformers
pipeline_tag: text-generation
tags:
- mistral
- mergekit
- merge
---
## Mistral-Depth-UP-Scaled-9B
An auto-regressive causal LM created by combining two fine-tuned Mistral 7B models into one.
## Benchmarks
Coming soon.
## Usage
``` python
# Load the model in 4-bit
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    "ayoubkirouane/Mistral-Depth-UP-Scaled-9B",
    device_map='auto',
    quantization_config=nf4_config,
    use_cache=False
)

tokenizer = AutoTokenizer.from_pretrained("ayoubkirouane/Mistral-Depth-UP-Scaled-9B")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

def generate_response(prompt, model, max_new_tokens):
    encoded_input = tokenizer(prompt, return_tensors="pt", add_special_tokens=True)
    model_inputs = encoded_input.to('cuda')
    generated_ids = model.generate(**model_inputs, max_new_tokens=max_new_tokens, do_sample=True, pad_token_id=tokenizer.eos_token_id)
    decoded_output = tokenizer.batch_decode(generated_ids)
    return decoded_output[0].replace(prompt, "")

generate_response(prompt="What are GANs?", model=model, max_new_tokens=100)
```
|
MaziyarPanahi/openchat-3.5-0106-Mistral-7B-Instruct-v0.1
|
MaziyarPanahi
| 2024-01-20T17:11:14Z
| 24
| 1
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"Safetensors",
"text-generation-inference",
"merge",
"7b",
"mistralai/Mistral-7B-Instruct-v0.1",
"openchat/openchat-3.5-0106",
"openchat",
"C-RLFT",
"arxiv:2309.11235",
"arxiv:2303.08774",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us",
"conversational"
] |
text-generation
| 2024-01-20T17:06:35Z
|
---
license: apache-2.0
tags:
- Safetensors
- text-generation-inference
- merge
- mistral
- 7b
- mistralai/Mistral-7B-Instruct-v0.1
- openchat/openchat-3.5-0106
- transformers
- safetensors
- mistral
- text-generation
- openchat
- C-RLFT
- arxiv:2309.11235
- arxiv:2303.08774
- base_model:mistralai/Mistral-7B-v0.1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
---
# openchat-3.5-0106-Mistral-7B-Instruct-v0.1
openchat-3.5-0106-Mistral-7B-Instruct-v0.1 is a merge of the following models:
* [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.1
        layer_range: [0, 32]
      - model: openchat/openchat-3.5-0106
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "MaziyarPanahi/openchat-3.5-0106-Mistral-7B-Instruct-v0.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Coooori/llama_checkpoint-1000
|
Coooori
| 2024-01-20T17:06:26Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T17:06:21Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
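For reference, the settings listed above correspond roughly to the `transformers` `BitsAndBytesConfig` sketched below (reconstructed from the listed values; not part of the original training code).
```python
# Sketch: the quantization settings above expressed as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```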
### Framework versions
- PEFT 0.4.0
|
hr16/Unimatch
|
hr16
| 2024-01-20T16:59:30Z
| 0
| 1
| null |
[
"license:mit",
"region:us"
] | null | 2024-01-20T16:56:35Z
|
---
license: mit
---
Reuploaded from https://github.com/autonomousvision/unimatch/blob/master/MODEL_ZOO.md. I'm only interested in in-the-wild optical flow estimation.
|
danielhanchen/merged_16bit_with_tags2
|
danielhanchen
| 2024-01-20T16:38:17Z
| 1
| 0
|
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/mistral-7b",
"base_model:finetune:unsloth/mistral-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T16:24:54Z
|
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b
---
# Uploaded model
- **Developed by:** danielhanchen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
arun100/whisper-small-bn-3
|
arun100
| 2024-01-20T16:37:06Z
| 8
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"bn",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:arun100/whisper-base-hi-1",
"base_model:finetune:arun100/whisper-base-hi-1",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-18T21:21:45Z
|
---
language:
- bn
license: apache-2.0
base_model: arun100/whisper-base-hi-1
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Base Bengali
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 bn
type: mozilla-foundation/common_voice_16_0
config: bn
split: test
args: bn
metrics:
- name: Wer
type: wer
value: 36.204844612672595
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Bengali
This model is a fine-tuned version of [arun100/whisper-base-hi-1](https://huggingface.co/arun100/whisper-base-hi-1) on the mozilla-foundation/common_voice_16_0 bn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2754
- Wer: 36.2048
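A minimal transcription sketch with the `transformers` ASR pipeline is shown below (the audio path is a placeholder; this example is not part of the original card).
```python
# Sketch: transcribe Bengali speech with this fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="arun100/whisper-small-bn-3",
    device=0,  # set to -1 for CPU
)

# "sample_bn.wav" is a placeholder for a local Bengali audio file.
result = asr("sample_bn.wav", generate_kwargs={"language": "bengali", "task": "transcribe"})
print(result["text"])
```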
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.693 | 1.04 | 500 | 0.6937 | 69.6782 |
| 0.3979 | 3.03 | 1000 | 0.4168 | 48.0703 |
| 0.3429 | 5.01 | 1500 | 0.3527 | 42.8105 |
| 0.2907 | 6.05 | 2000 | 0.3225 | 40.4267 |
| 0.2761 | 8.03 | 2500 | 0.3039 | 38.8974 |
| 0.2637 | 10.02 | 3000 | 0.2921 | 37.7927 |
| 0.2507 | 12.0 | 3500 | 0.2846 | 37.0733 |
| 0.2397 | 13.04 | 4000 | 0.2793 | 36.6004 |
| 0.243 | 15.03 | 4500 | 0.2763 | 36.3503 |
| 0.2501 | 17.01 | 5000 | 0.2754 | 36.2048 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
itdenismaslyuk/openchat-recommendation
|
itdenismaslyuk
| 2024-01-20T16:30:46Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openchat/openchat-3.5-0106",
"base_model:adapter:openchat/openchat-3.5-0106",
"region:us"
] | null | 2024-01-20T15:30:40Z
|
---
library_name: peft
base_model: openchat/openchat-3.5-0106
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Ben141/LLM20
|
Ben141
| 2024-01-20T16:24:39Z
| 1
| 0
|
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-20T16:07:55Z
|
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: LLM20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLM20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 120
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
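## Usage
No usage example was included; the following is a minimal sketch (an assumption, not the author's documented workflow) of loading the adapter and merging it into the gated Llama-2 base.
```python
# Sketch: load the LLM20 LoRA adapter and optionally merge it into the Llama-2 base.
# meta-llama/Llama-2-7b-hf is gated; an accepted license and HF token are required.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "Ben141/LLM20", device_map="auto", torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

merged = model.merge_and_unload()  # fold the LoRA weights into the base model
inputs = tokenizer("Tell me a short fact about llamas.", return_tensors="pt").to(merged.device)
outputs = merged.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```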
|
Coooori/llama_checkpoint-800
|
Coooori
| 2024-01-20T16:24:38Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T16:24:34Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
LarryAIDraw/ruanmei
|
LarryAIDraw
| 2024-01-20T16:21:05Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-20T16:14:58Z
|
---
license: creativeml-openrail-m
---
https://civitai.com/models/269947/ruan-mei-or-honkai-star-rail
|
LarryAIDraw/Firefly-1024-v1
|
LarryAIDraw
| 2024-01-20T16:20:00Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-20T16:12:36Z
|
---
license: creativeml-openrail-m
---
https://civitai.com/models/271022/firefly-honkai-star-rail-lora
|
FelixChao/Severus-7B
|
FelixChao
| 2024-01-20T16:11:53Z
| 1,364
| 1
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"samir-fama/FernandoGPT-v1",
"FelixChao/NinjaDolphin-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T10:55:44Z
|
---
license: apache-2.0
tags:
- samir-fama/FernandoGPT-v1
- FelixChao/NinjaDolphin-7B
---
# Severus-7B
Severus-7B is a merge of the following models:
* [samir-fama/FernandoGPT-v1](https://huggingface.co/samir-fama/FernandoGPT-v1)
* [FelixChao/NinjaDolphin-7B](https://huggingface.co/FelixChao/NinjaDolphin-7B)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: samir-fama/FernandoGPT-v1
        layer_range: [0, 24]
  - sources:
      - model: FelixChao/NinjaDolphin-7B
        layer_range: [24, 32]
merge_method: passthrough
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "FelixChao/Severus-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
EmbeddedLLM/Mistral-7B-Merge-14-v0.1
|
EmbeddedLLM
| 2024-01-20T16:08:55Z
| 183
| 24
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"en",
"base_model:EmbeddedLLM/Mistral-7B-Merge-14-v0",
"base_model:finetune:EmbeddedLLM/Mistral-7B-Merge-14-v0",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-18T07:48:35Z
|
---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
base_model:
- EmbeddedLLM/Mistral-7B-Merge-14-v0
- janai-hq/trinity-v1
---
# Update 2023-12-19
In light of the [dataset contamination issue among the merged models](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474)
raised by the community in recent days, in particular
[berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha),
[Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling), and
[janai-hq/trinity-v1](https://huggingface.co/janai-hq/trinity-v1),
we decided to make another model without the models mentioned.
Additionally, their CC-BY-NC-4.0 license is restrictive and thus not suitable for an open model.
# Model Description
This is an experiment to test merging 14 models using DARE TIES 🦙
The merged model is then merged again with [janai-hq/trinity-v1](https://huggingface.co/janai-hq/trinity-v1) using Gradient SLERP.
The result is a base model that performs quite well but requires some further instruction fine-tuning.
The 14 models are as follows:
1. [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
2. [ehartford/dolphin-2.2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b)
3. [SciPhi/SciPhi-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Mistral-7B-32k)
4. [ehartford/samantha-1.2-mistral-7b](https://huggingface.co/ehartford/samantha-1.2-mistral-7b)
5. [Arc53/docsgpt-7b-mistral](https://huggingface.co/Arc53/docsgpt-7b-mistral)
6. [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
7. [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
8. [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
9. [v1olet/v1olet_marcoroni-go-bruins-merge-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B)
10. [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1)
11. [TIGER-Lab/MAmmoTH-7B-Mistral](https://huggingface.co/TIGER-Lab/MAmmoTH-7B-Mistral)
12. [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
13. [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
14. [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
- base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
The yaml config file for this model is here:
```yaml
slices:
  - sources:
      - model: EmbeddedLLM/Mistral-7B-Merge-14-v0
        layer_range: [0, 32]
      - model: janai-hq/trinity-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: EmbeddedLLM/Mistral-7B-Merge-14-v0
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
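To reproduce a merge from a config like the one above, mergekit can be driven from Python roughly as in the sketch below. This mirrors the LazyMergekit notebook pattern; the exact entry points and option names may differ between mergekit versions, so treat it as an assumption rather than official usage.
```python
# Sketch: run a mergekit merge from a YAML config (names follow the LazyMergekit notebook).
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```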
|
EmbeddedLLM/Mistral-7B-Merge-14-v0
|
EmbeddedLLM
| 2024-01-20T16:08:24Z
| 14
| 3
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"base_model:Arc53/docsgpt-7b-mistral",
"base_model:merge:Arc53/docsgpt-7b-mistral",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:merge:Open-Orca/Mistral-7B-OpenOrca",
"base_model:Q-bert/MetaMath-Cybertron-Starling",
"base_model:merge:Q-bert/MetaMath-Cybertron-Starling",
"base_model:SciPhi/SciPhi-Mistral-7B-32k",
"base_model:merge:SciPhi/SciPhi-Mistral-7B-32k",
"base_model:TIGER-Lab/MAmmoTH-7B-Mistral",
"base_model:merge:TIGER-Lab/MAmmoTH-7B-Mistral",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:merge:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:beowolx/MistralHermes-CodePro-7B-v1",
"base_model:merge:beowolx/MistralHermes-CodePro-7B-v1",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:merge:berkeley-nest/Starling-LM-7B-alpha",
"base_model:cognitivecomputations/dolphin-2.2.1-mistral-7b",
"base_model:merge:cognitivecomputations/dolphin-2.2.1-mistral-7b",
"base_model:cognitivecomputations/samantha-1.2-mistral-7b",
"base_model:merge:cognitivecomputations/samantha-1.2-mistral-7b",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:merge:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:v1olet/v1olet_marcoroni-go-bruins-merge-7B",
"base_model:merge:v1olet/v1olet_marcoroni-go-bruins-merge-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-18T05:40:05Z
|
---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- ehartford/dolphin-2.2.1-mistral-7b
- SciPhi/SciPhi-Mistral-7B-32k
- ehartford/samantha-1.2-mistral-7b
- Arc53/docsgpt-7b-mistral
- berkeley-nest/Starling-LM-7B-alpha
- Q-bert/MetaMath-Cybertron-Starling
- Open-Orca/Mistral-7B-OpenOrca
- v1olet/v1olet_marcoroni-go-bruins-merge-7B
- beowolx/MistralHermes-CodePro-7B-v1
- TIGER-Lab/MAmmoTH-7B-Mistral
- teknium/OpenHermes-2.5-Mistral-7B
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# Update 2023-12-19
In light of the [dataset contamination issue among the merged models](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474)
raised by the community in recent days, in particular
[berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) and
[Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling),
we decided to make another model without the models mentioned.
Additionally, their CC-BY-NC-4.0 license is restrictive and thus not suitable for an open model.
# Model Description
This is an experiment to test merging 14 models using DARE TIES 🦙
The result is a base model that performs quite well but requires some further instruction fine-tuning.
The 14 models are as follows:
1. [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
2. [ehartford/dolphin-2.2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b)
3. [SciPhi/SciPhi-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Mistral-7B-32k)
4. [ehartford/samantha-1.2-mistral-7b](https://huggingface.co/ehartford/samantha-1.2-mistral-7b)
5. [Arc53/docsgpt-7b-mistral](https://huggingface.co/Arc53/docsgpt-7b-mistral)
6. [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
7. [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
8. [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
9. [v1olet/v1olet_marcoroni-go-bruins-merge-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B)
10. [beowolx/MistralHermes-CodePro-7B-v1](https://huggingface.co/beowolx/MistralHermes-CodePro-7B-v1)
11. [TIGER-Lab/MAmmoTH-7B-Mistral](https://huggingface.co/TIGER-Lab/MAmmoTH-7B-Mistral)
12. [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
13. [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
14. [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
- base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
The yaml config file for this model is here:
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: ehartford/dolphin-2.2.1-mistral-7b
    parameters:
      weight: 0.08
      density: 0.4
  - model: SciPhi/SciPhi-Mistral-7B-32k
    parameters:
      weight: 0.08
      density: 0.4
  - model: ehartford/samantha-1.2-mistral-7b
    parameters:
      weight: 0.08
      density: 0.4
  - model: Arc53/docsgpt-7b-mistral
    parameters:
      weight: 0.08
      density: 0.4
  - model: berkeley-nest/Starling-LM-7B-alpha
    parameters:
      weight: 0.08
      density: 0.4
  - model: Q-bert/MetaMath-Cybertron-Starling
    parameters:
      weight: 0.08
      density: 0.4
  - model: Open-Orca/Mistral-7B-OpenOrca
    parameters:
      weight: 0.08
      density: 0.4
  - model: v1olet/v1olet_marcoroni-go-bruins-merge-7B
    parameters:
      weight: 0.08
      density: 0.4
  - model: beowolx/MistralHermes-CodePro-7B-v1
    parameters:
      weight: 0.08
      density: 0.4
  - model: TIGER-Lab/MAmmoTH-7B-Mistral
    parameters:
      weight: 0.08
      density: 0.4
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      weight: 0.08
      density: 0.4
  - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
    parameters:
      weight: 0.08
      density: 0.4
  - model: mlabonne/NeuralHermes-2.5-Mistral-7B
    parameters:
      weight: 0.08
      density: 0.4
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      weight: 0.08
      density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
|
CultriX/CultriX-MoE-Model
|
CultriX
| 2024-01-20T16:07:19Z
| 54
| 3
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser",
"base_model:fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser",
"base_model:merge:fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:merge:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T15:58:18Z
|
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
base_model:
- mlabonne/NeuralBeagle14-7B
- fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser
---
# CultriX-MoE-Model
CultriX-MoE-Model is a Mixture of Experts (MoE) made from the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser)
## 🧩 Configuration
```yaml
base_model: "mlabonne/Marcoro14-7B-slerp"
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: "mlabonne/NeuralBeagle14-7B"
positive_prompts:
- "Create a story based on"
- "Debate the topic of"
- "Come up with some arguments"
- "Provide me with instructions on"
- "Interpret the sentiment"
- "Interpret and execute these cooking instructions"
- "Craft a persuasive argument"
- "Analyze the motivations"
- "Construct a detailed plan for"
- "Narrate an event from multiple perspectives."
- "Formulate a response"
- "Write a script for a short play"
- "Generate a sequence of instructions to teach a skill."
- "Solve this riddle"
- "Create an engaging story"
- "Write a fictional"
- "Propose a solution to a social issue"
- "Develop a dialogue"
- "Create a step-by-step guide"
- "Devise a strategy"
- "Write a narrative"
- "Tell me how to"
- "Explain the concept of"
- "Give an overview of"
- "Compare and contrast between"
- "Provide information about"
- "Help me understand"
- "Summarize"
- "Make a recommendation on"
- "Answer this question"
- "How do you approach"
- "Explain the concept of"
- "Give an overview of"
- "Provide information about"
- "Help me understand the principles of"
- "Summarize the key components of"
- "Make a recommendation on how to"
- "Answer this question:"
negative_prompts:
- "Provide in-depth information about quantum computing."
- "Explain the inner workings of an internal combustion engine."
- "Give a detailed tutorial on advanced calculus."
- "Summarize the latest research in genetic engineering."
- "Interpret financial markets and stock trends."
- "Analyze the chemical composition of"
- "Develop a blueprint for."
- "Offer a critique of a modern art piece."
- "Provide a technical review of"
- "Conduct a linguistic analysis of an ancient language."
- "Write a user manual for advanced medical equipment."
- "Give a step-by-step guide on piloting an aircraft."
- "Conduct an in-depth analysis of this code"
- "Explain the physics behind black holes."
- "Provide a strategy for managing a cyber attack"
- "Develop an algorithm for predictive analytics in finance."
- "Provide information about advanced programming algorithms."
- "Help me understand the details of this code"
- "Summarize the process of cellular respiration."
- "Improve the security of"
- "What are the latest advancements in artificial intelligence?"
- "Provide detailed technical coding solutions."
- "Analyze complex scientific data and statistics."
- "Offer medical diagnoses based on symptoms."
- "Conduct a detailed financial audit of a company."
- "Perform real-time translation of multiple languages."
- "Create high-resolution graphic designs."
- "Develop complex mathematical proofs."
- "Offer legal advice on specific cases."
- "Write a detailed manual on advanced mechanical engineering."
- "Conduct an in-depth psychological assessment."
- "Perform a security analysis of a computer network."
- "Compose an original piece of music."
- "Plan and execute a scientific experiment."
- "Provide professional career counseling."
- "Develop a complex database management system."
- "Write a software program for data analysis."
- "Give expert advice on cyber"
- "Conduct a pentesting security audit"
- source_model: "fblgit/UNA-dolphin-2.6-mistral-7b-dpo-laser"
positive_prompts:
- "Provide step-by-step coding instructions for..."
- "Draft a function with detailed steps in [language]"
- "Guide me through coding a simple [type of application or script]"
- "Recommend best practices for code implementation in [context]"
- "Generate a regex pattern for extracting [specific data]"
- "Create a regex for matching [pattern]"
- "Explain the purpose of this regex pattern"
- "Compose regex for [specific use case]"
- "Annotate this code with detailed comments for each line"
- "Add explanatory comments to this script"
- "Comment on each part of this code for clarity"
- "Develop a script to [accomplish task]"
- "Design a database schema for [specific use case]"
- "Outline secure methods for [specific operation]"
- "Guide on optimizing [specific aspect] in this code"
- "Refactor this code for better readability and efficiency"
- "Compare and contrast these code snippets"
- "Identify the programming language of this snippet"
- "Demonstrate the usage of [specific tool/library/API]"
- "Show implementation steps for this [feature/concept]"
- "Teach how to use [specific tool/library/framework]"
- "Generate a README file for this project"
- "Create a manual page for [specific tool/command]"
- "Produce comprehensive documentation for this code"
- "Build detailed documentation for [specific module]"
- "Explain the underlying concept of this code snippet"
- "Propose enhancements for this script"
- "Suggest improvements for this API call integration"
- "Diagnose and solve this coding issue"
- "Demonstrate robust error handling in this code"
- "Debug and resolve issues in this script"
- "Design a user-friendly GUI for this script's functionality"
- "Detail the deployment process for this application"
- "Deploy an app designed to [perform function]"
- "Set up a web service for [specific purpose]"
- "Develop a website with [specific features]"
- "Craft a webpage showcasing [specific content]"
- "Illustrate data flow in this code architecture"
- "Convert this code from [language A] to [language B]"
- "Translate this script into [different programming language]"
- "Explain resource management techniques in [context]"
- "Build a basic API endpoint for [functionality]"
- "Strategies to enhance scalability in [context]"
- "Conduct a security review for this code"
- "Enhance security measures in [application/module]"
- "Set up a development environment for [language/framework]"
- "Visualize data from [specific dataset]"
- "Generate a dataset for [specific use case]"
- "Scripting guide for automating [task/process]"
- "Utilize this code for [specific purpose]"
- "Principles of object-oriented programming in [language]"
- "Create a mobile-responsive layout for this web app"
- "Explain the debugging process for this code"
- "Compose code to accomplish [task]"
- "Guidance on writing code for [specific purpose]"
- "I need a script for [specific function]"
- "Clarify the functionality of this code"
- "What is the purpose of this code segment?"
- "Enhance this code for [specific improvement]"
- "Develop a program that [solves problem]"
- "Code needed for [specific task]"
- "Program a solution for [problem statement]"
- "Enhance this function's performance by..."
- "Refactor code for better readability in [context]"
- "Craft a custom function for [specific requirement]"
- "Reduce computational complexity in this algorithm by..."
- "Extend the codebase to include [new feature]"
- "Incorporate this API into an existing application"
- "Assist in troubleshooting and bug fixing for [issue]"
- "Review and prep this code for deployment"
- "Analyze error logs for potential issues in [context]"
- "Create unit tests for [module/component]"
- "Evaluate methodologies for [problem-solving]"
- "Research [topic] online"
- "Utilize the [plugin/tool] to achieve [result]"
- "Design an efficient search algorithm for [data type]"
- "Create a web crawler for [specific data extraction]"
- "Application of web sockets in [real-time scenario]"
- "Guide to integrating a third-party library in [framework]"
- "Best practices in API design for [application type]"
negative_prompts:
- "Provide a detailed analysis of historical events."
- "Give medical advice for treating a specific illness."
- "Write a comprehensive review of a novel."
- "Explain legal implications of a contract."
- "Develop a marketing strategy for a new product."
- "Offer financial advice for stock investments."
- "Create a recipe for a gourmet dish."
- "Teach a foreign language lesson."
- "Compose a symphony or musical piece."
- "Provide workout plans and fitness coaching."
- "Conduct a psychological analysis of a character."
- "Write a script for a movie or play."
- "Design a blueprint for architectural structures."
- "Give a tutorial on how to paint a landscape."
- "Explain quantum physics theories."
- "Offer career counseling and resume writing tips."
- "Teach how to repair a car engine."
- "Plan a travel itinerary for a world tour."
- "Guide on how to grow organic vegetables."
- "Discuss political strategies for an election campaign."
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "CultriX/CultriX-MoE-Model"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
EmbeddedLLM/Mistral-7B-Merge-14-v0.3-ft-step-15936
|
EmbeddedLLM
| 2024-01-20T16:07:06Z
| 56
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"merge",
"fine-tuned",
"en",
"dataset:cognitivecomputations/dolphin",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:teknium/openhermes",
"dataset:migtissera/Synthia-v1.3",
"base_model:Arc53/docsgpt-7b-mistral",
"base_model:merge:Arc53/docsgpt-7b-mistral",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:merge:HuggingFaceH4/zephyr-7b-beta",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"base_model:merge:Open-Orca/Mistral-7B-OpenOrca",
"base_model:SciPhi/SciPhi-Mistral-7B-32k",
"base_model:merge:SciPhi/SciPhi-Mistral-7B-32k",
"base_model:TIGER-Lab/MAmmoTH-7B-Mistral",
"base_model:merge:TIGER-Lab/MAmmoTH-7B-Mistral",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:merge:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:beowolx/MistralHermes-CodePro-7B-v1",
"base_model:merge:beowolx/MistralHermes-CodePro-7B-v1",
"base_model:cognitivecomputations/dolphin-2.2.1-mistral-7b",
"base_model:merge:cognitivecomputations/dolphin-2.2.1-mistral-7b",
"base_model:cognitivecomputations/samantha-1.2-mistral-7b",
"base_model:merge:cognitivecomputations/samantha-1.2-mistral-7b",
"base_model:meta-math/MetaMath-Mistral-7B",
"base_model:merge:meta-math/MetaMath-Mistral-7B",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:merge:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:openchat/openchat-3.5-1210",
"base_model:merge:openchat/openchat-3.5-1210",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:merge:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T04:25:57Z
|
---
license: apache-2.0
datasets:
- cognitivecomputations/dolphin
- cognitivecomputations/dolphin-coder
- ise-uiuc/Magicoder-OSS-Instruct-75K
- teknium/openhermes
- migtissera/Synthia-v1.3
language:
- en
tags:
- merge
- fine-tuned
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- ehartford/dolphin-2.2.1-mistral-7b
- SciPhi/SciPhi-Mistral-7B-32k
- ehartford/samantha-1.2-mistral-7b
- Arc53/docsgpt-7b-mistral
- HuggingFaceH4/zephyr-7b-beta
- meta-math/MetaMath-Mistral-7B
- Open-Orca/Mistral-7B-OpenOrca
- openchat/openchat-3.5-1210
- beowolx/MistralHermes-CodePro-7B-v1
- TIGER-Lab/MAmmoTH-7B-Mistral
- teknium/OpenHermes-2.5-Mistral-7B
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# Model Description
This is a fine-tuned model based on EmbeddedLLM/Mistral-7B-Merge-14-v0.3, trained for 3 epochs.
The datasets used are:
* dolphin
* dolphin-coder
* Magicoder-OSS-Instruct-75K
* openhermes
* Synthia-v1.3
## Chat Template
Prompt format: This model uses the ChatML prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
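A minimal sketch of assembling this prompt in Python (plain string formatting; if the tokenizer ships a chat template, `tokenizer.apply_chat_template` should be preferred):
```python
# Sketch: assemble a ChatML-formatted prompt for this model.
def chatml_prompt(user_message: str, system_message: str = "You are Dolphin, a helpful AI assistant.") -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(chatml_prompt("Write a haiku about merging models."))
```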
# Training
The model has been fine-tuned for 3 epochs on 4 A100s using axolotl.
# Shout-Out to OSS
Thank you to the Open Source AI community for bringing together marvelous code frameworks and datasets.
|
EmbeddedLLM/Mistral-7B-Merge-02-v0
|
EmbeddedLLM
| 2024-01-20T16:05:26Z
| 55
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"base_model:Intel/neural-chat-7b-v3-3",
"base_model:merge:Intel/neural-chat-7b-v3-3",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:merge:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-20T13:14:41Z
|
---
license: apache-2.0
language:
- en
tags:
- merge
base_model:
- teknium/OpenHermes-2.5-Mistral-7B
- Intel/neural-chat-7b-v3-3
---
# Model Description
This is an experiment to compare merging 2 models using DARE TIES versus SLERP 🦙
We are mainly interested in comparing against [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp).
The 2 models involved in the merge are as follows:
1. [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
2. [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
- base model: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
The yaml config file for the merge is:
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      weight: 0.5
      density: 0.5
  - model: Intel/neural-chat-7b-v3-3
    parameters:
      weight: 0.5
      density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
# Open LLM Leaderboard
Note that with more tuning DARE TIES might achieve better results.
| | DARE TIES | SLERP |
|------------|-----------|-------|
| Average | 70.69 | 71.38 |
| ARC | 67.49 | 68.09 |
| HellaSwag | 85.78 | 86.2 |
| MMLU | 64.1 | 64.26 |
| TruthfulQA | 60.52 | 62.78 |
| Winogrande | 79.01 | 79.16 |
| GSM8K | 67.25 | 67.78 |
|
beom98/ppo-Huggy
|
beom98
| 2024-01-20T16:03:42Z
| 0
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-20T16:03:27Z
|
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: beom98/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Coooori/llama_checkpoint-700
|
Coooori
| 2024-01-20T16:03:40Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T16:03:38Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
MatrixC7/goliath-120b-wbcal-2.65bpw-h6-exl2
|
MatrixC7
| 2024-01-20T16:01:09Z
| 8
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T11:12:19Z
|
---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- merge
---
- The calibration dataset is from [VatsaDev/worldbuild](https://huggingface.co/datasets/VatsaDev/worldbuild).
- The measurement file is attached in the branch `measurement`.
- Perplexity:
- calibration: `Module quantized, calibration perplexity (quant): 10.1205`
- [wikitext-103-v1](https://huggingface.co/datasets/wikitext): `Evaluation perplexity: 6.1437`
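A minimal loading sketch with the ExLlamaV2 Python API is shown below (class and method names follow the exllamav2 examples current at the time of writing and may change between versions; the model path is a placeholder).
```python
# Sketch: load the EXL2 quant with exllamav2 and generate a short completion.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/goliath-120b-wbcal-2.65bpw-h6-exl2"  # placeholder local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

prompt = "USER: Write a two-sentence story about a lighthouse.\nASSISTANT:"
print(generator.generate_simple(prompt, settings, 128))
```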
---
# Goliath 120B
An auto-regressive causal LM created by combining two fine-tuned [Llama-2 70B](https://huggingface.co/meta-llama/llama-2-70b-hf) models into one.
Please check out the quantized formats provided by [@TheBloke](https://huggingface.co/TheBloke) and [@Panchovix](https://huggingface.co/Panchovix):
- [GGUF](https://huggingface.co/TheBloke/goliath-120b-GGUF) (llama.cpp)
- [GPTQ](https://huggingface.co/TheBloke/goliath-120b-GPTQ) (KoboldAI, TGW, Aphrodite)
- [AWQ](https://huggingface.co/TheBloke/goliath-120b-AWQ) (TGW, Aphrodite, vLLM)
- [Exllamav2](https://huggingface.co/Panchovix/goliath-120b-exl2) (TGW, KoboldAI)
# Prompting Format
Both Vicuna and Alpaca will work, but due to the initial and final layers belonging primarily to Xwin, I expect Vicuna to work the best.
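As a rough illustration (not an official template for this merge), a Vicuna-style prompt can be assembled like this:
```python
# Sketch: build a Vicuna-style prompt for Goliath 120B.
def vicuna_prompt(user_message: str) -> str:
    system = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    )
    return f"{system} USER: {user_message} ASSISTANT:"

print(vicuna_prompt("Describe the city of Atlantis."))
```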
# Merge process
The models used in the merge are [Xwin](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) and [Euryale](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B).
The layer ranges used are as follows:
```yaml
- range 0, 16
Xwin
- range 8, 24
Euryale
- range 17, 32
Xwin
- range 25, 40
Euryale
- range 33, 48
Xwin
- range 41, 56
Euryale
- range 49, 64
Xwin
- range 57, 72
Euryale
- range 65, 80
Xwin
```
# Screenshots

# Benchmarks
Coming soon.
# Acknowledgements
Credit goes to [@chargoddard](https://huggingface.co/chargoddard) for developing the framework used to merge the model - [mergekit](https://github.com/cg123/mergekit).
Special thanks to [@Undi95](https://huggingface.co/Undi95) for helping with the merge ratios.
|
chavudosoa/chatbot-rnn.model
|
chavudosoa
| 2024-01-20T15:55:57Z
| 0
| 0
|
keras
|
[
"keras",
"text-generation",
"en",
"arxiv:1910.09700",
"license:mit",
"region:us"
] |
text-generation
| 2024-01-20T02:11:10Z
|
---
license: mit
library_name: keras
pipeline_tag: text-generation
language:
- en
---
Chatbot RNN
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Oblix/distilbert-multilingual-nli-stsb-quora-ranking_ONNX
|
Oblix
| 2024-01-20T15:48:49Z
| 3
| 0
|
transformers
|
[
"transformers",
"onnx",
"distilbert",
"feature-extraction",
"sentence-similarity",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-20T15:35:31Z
|
---
pipeline_tag: sentence-similarity
---
https://huggingface.co/sentence-transformers/distilbert-multilingual-nli-stsb-quora-ranking with ONNX weights to be compatible with Transformers.js.
|
TencentARC/ViSFT
|
TencentARC
| 2024-01-20T15:45:48Z
| 0
| 3
| null |
[
"arxiv:2401.10222",
"license:apache-2.0",
"region:us"
] | null | 2024-01-16T07:46:36Z
|
---
license: apache-2.0
---
This is the official repo for the paper [Supervised Fine-tuning *in turn* Improves Visual Foundation Models](https://arxiv.org/abs/2401.10222).
<div align="center">
📃[**Paper (ArXiv)**](https://arxiv.org/abs/2401.10222) **|** [**Code**](https://github.com/TencentARC/ViSFT) **|** 🤗[**Huggingface**](https://huggingface.co/TencentARC/ViSFT)
</div>
## News
* [2024/01/19] We open-source [ViSFT](), including training scripts and weights. Evaluation code will be released soon.
## Introduction
Image-text training like CLIP has dominated the pretraining of vision foundation models in recent years. Subsequent efforts have been made to introduce region-level visual learning into CLIP’s pretraining but face scalability challenges due to the lack of large-scale region-level datasets. Drawing inspiration from supervised fine-tuning (SFT) in natural language processing, such as instruction tuning, we explore the potential of fine-grained SFT in enhancing the generation of vision foundation models after their pretraining. Thus a two-stage method, **ViSFT** (**Vi**sion **SFT**), is proposed to unleash the fine-grained knowledge of vision foundation models. In ViSFT, the vision foundation model is enhanced by performing visual joint learning on some in-domain tasks and then tested on out-of-domain benchmarks. After updating with ViSFT on 8 V100 GPUs in less than 2 days, a vision transformer with over 4.4B parameters shows improvements across various out-of-domain benchmarks, including vision and vision-linguistic scenarios.
## Installation
### creating a conda environment
```
conda create -n ViSFT python=3.8
conda activate ViSFT
```
### Install pytorch
We use torch 1.12 with CUDA 11.3 on 8 NVIDIA Volta V100-SXM2-32GB GPUs.
```
pip install --extra-index-url https://download.pytorch.org/whl/cu113 torch==1.12.0
pip install --extra-index-url https://download.pytorch.org/whl/cu113 torchvision==0.13.0
pip install --extra-index-url https://download.pytorch.org/whl/cu113 torchaudio==0.12.0
```
### xformers installation
Flash attention is required for running EVA-ViT-E.
please refer to [xformers](https://github.com/facebookresearch/xformers)
### loralib installation
```
pip install --user git+https://github.com/microsoft/LoRA
```
### compile MSDeform for Mask2former head
```
cd ./mmf/models/visft/ops
sudo sh make.sh
# back to root dir
cd ../../../../
```
### Other packages installation
```
pip install -r requirements.txt
```
## Dataset Preparation
export DATA_PATH=your_data_path
### image caption
Generating hdf5 files for image caption following [hdf5](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning/blob/master/create_input_files.py)
File structure:
```
DATA_PATH/
└── processed_datasets/
    └── coco_caption_hdf5_files/
        ├── TEST_CAPLENS_coco_5_cap_per_img_5_min_word_freq.json
        ├── TEST_CAPTIONS_coco_5_cap_per_img_5_min_word_freq.json
        ├── TEST_IMAGES_coco_5_cap_per_img_5_min_word_freq.hdf5
        ├── TRAIN_CAPLENS_coco_5_cap_per_img_5_min_word_freq.json
        ├── TRAIN_CAPTIONS_coco_5_cap_per_img_5_min_word_freq.json
        ├── TRAIN_IMAGES_coco_5_cap_per_img_5_min_word_freq.hdf5
        ├── VAL_CAPLENS_coco_5_cap_per_img_5_min_word_freq.json
        ├── VAL_CAPTIONS_coco_5_cap_per_img_5_min_word_freq.json
        ├── VAL_IMAGES_coco_5_cap_per_img_5_min_word_freq.hdf5
        └── WORDMAP_coco_5_cap_per_img_5_min_word_freq.json
```
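A quick way to inspect the generated files (a minimal sketch; the dataset key names and shapes follow the referenced captioning tutorial and are assumptions here):
```python
# Inspect the generated caption HDF5/JSON files. Key names and shapes follow the
# a-PyTorch-Tutorial-to-Image-Captioning convention and are assumptions here.
import json
import os

import h5py

data_dir = os.path.join(os.environ["DATA_PATH"], "processed_datasets", "coco_caption_hdf5_files")

with h5py.File(os.path.join(data_dir, "TRAIN_IMAGES_coco_5_cap_per_img_5_min_word_freq.hdf5"), "r") as f:
    name = list(f.keys())[0]    # the reference pipeline stores a single 'images' dataset
    print(name, f[name].shape)  # e.g. (num_images, 3, 256, 256)

with open(os.path.join(data_dir, "WORDMAP_coco_5_cap_per_img_5_min_word_freq.json")) as f:
    word_map = json.load(f)
print(len(word_map), "tokens in the word map")
```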
### Detection & Segmentation
File structure:
```
DATA_PATH/
└── public_datasets/
└─── coco
├──train2017
├──val2017
├──test2017
└───annotations
├──instances_train2017.json
├──instances_val2017.json
└───image_info_test-dev2017.json
```
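To sanity-check this layout, the annotations can be loaded with `pycocotools` (a minimal sketch using the paths above):
```python
# Sanity-check the COCO annotations referenced above with pycocotools.
import os
from pycocotools.coco import COCO

ann_file = os.path.join(os.environ["DATA_PATH"], "public_datasets", "coco",
                        "annotations", "instances_val2017.json")
coco = COCO(ann_file)
print(len(coco.getImgIds()), "val2017 images")     # 5000 for COCO 2017 val
print(len(coco.getCatIds()), "object categories")  # 80
```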
## Training
### Stage1
To obtain compatible in-domain task heads, we use 8 NVIDIA Volta V100-SXM2-32GB GPUs for every in-domain task head.
**For eva-vit-g**
Preparing weights from [LAVIS](https://github.com/salesforce/LAVIS)
```
wget https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/eva_vit_g.pth
```
Add your weights path to the configs under ./projects/visft/configs/stage1/eva_g/:
```
backbone_dir: path/eva_vit_g.pth
```
Run training:
```
bash ./scripts/stage1_train/eva_g/caption.sh
bash ./scripts/stage1_train/eva_g/detection.sh
bash ./scripts/stage1_train/eva_g/segment.sh
```
**For eva-vit-e**
Preparing EVA-CLIP weights from [EVA](https://huggingface.co/QuanSun/EVA-CLIP/blob/main/EVA02_CLIP_E_psz14_plus_s9B.pt)
Extract the ViT weights:
```
python ./scripts/preprocess/extract_eva_e_vit.py
```
Add your weights path to the configs under ./projects/visft/configs/stage1/eva_e/:
```
backbone_dir: path/EVA02_CLIP_E_psz14_plus_s9B_Visual.pt
```
Run training:
```
# can be executed in parallel
bash ./scripts/stage1_train/eva_e/caption.sh
bash ./scripts/stage1_train/eva_e/detection.sh
bash ./scripts/stage1_train/eva_e/segment.sh
```
Or you can use the weights we provided.
| In-domain Heads | EVA-G | EVA-E |
|----------|:-------------:|:-------------:|
| Caption Head | [weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_g_caption_heads.ckpt) | [weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_e_caption_heads.ckpt)|
| Segment Head | [weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_g_segment_heads.ckpt) |[weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_e_segment_heads.ckpt)|
| Detection Head | [weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_g_detection_heads.ckpt) |[weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_e_detection_heads.ckpt)|
### Stage2
**For eva-vit-g**
Add your weights paths to the config file ./projects/visft/configs/stage2/eva_g/stage2.yaml:
```
backbone_dir: path/eva_vit_g.pth
caption_ckpt_path: 'path/eva_g_caption_heads.ckpt'
segment_ckpt_path: 'path/eva_g_segment_heads.ckpt'
detection_ckpt_path: 'path/eva_g_detection_heads.ckpt'
```
Run training:
```
bash ./scripts/stage2_train/eva_g/stage2.sh
```
**For eva-vit-e**
Add your weights paths to the config file ./projects/visft/configs/stage2/eva_e/stage2.yaml:
```
backbone_dir: path/EVA02_CLIP_E_psz14_plus_s9B_Visual.pt
caption_ckpt_path: 'path/eva_e_caption_heads.ckpt'
segment_ckpt_path: 'path/eva_e_segment_heads.ckpt'
detection_ckpt_path: 'path/eva_e_detection_heads.ckpt'
```
Run training:
```
bash ./scripts/stage2_train/eva_e/stage2.sh
```
### Get LoRA Weights
You can extract the expected LoRA weights by running:
```
python ./scripts/postprocess/extract_lora_weights.py
```
Or use the LoRA weights we provide:
| Iters (LoRA weights) | EVA-G | EVA-E |
|----------|:-------------:|:-------------:|
| 5k | [weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_g_lora_5000.pt) | [weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_e_lora_5000.pt)|
| 10k | [weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_g_lora_10000.pt) |[weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_e_lora_10000.pt)|
| 15k | [weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_g_lora_15000.pt) |[weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_e_lora_15000.pt)|
| 20k | [weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_g_lora_20000.pt) |[weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_e_lora_20000.pt)|
| 50k | [weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_g_lora_50000.pt) |[weights](https://huggingface.co/TencentARC/ViSFT/blob/main/eva_e_lora_50000.pt)|
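For reference, extracting and re-applying LoRA-only weights with `loralib` generally looks like the sketch below (the module is a toy stand-in; the exact key layout of the released `.pt` files is an assumption):
```python
# Toy sketch of extracting / re-applying LoRA-only weights with loralib.
import torch
import torch.nn as nn
import loralib as lora

model = nn.Sequential(lora.Linear(768, 768, r=8), nn.GELU(), lora.Linear(768, 768, r=8))

# Extraction: keep only the lora_A / lora_B tensors.
torch.save(lora.lora_state_dict(model), "lora_demo.pt")

# Re-applying: load the base weights first, then the LoRA weights on top.
state = torch.load("lora_demo.pt", map_location="cpu")
missing, unexpected = model.load_state_dict(state, strict=False)
print(sorted(state.keys()))                # only *.lora_A / *.lora_B entries
print(len(unexpected), "unexpected keys")  # 0 when ranks and module names match
```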
## Evaluation Benchmarks
- [ ] Zero-shot Image Classification
- [ ] Zero-shot Image-text Retrieval
- [ ] OCR
- [ ] Grounded Object Identification
- [ ] VQA
- [ ] Image Captioning on NoCaps
## Acknowledgement
The code of ViSFT is based on the official implementations of [mmf](https://github.com/facebookresearch/mmf), [EVA](https://github.com/baaivision/EVA/tree/master) and [LAVIS](https://github.com/salesforce/LAVIS/tree/main).
## Citation
If you found our work valuable, please cite:
```
@misc{jiang2024supervised,
title={Supervised Fine-tuning in turn Improves Visual Foundation Models},
author={Xiaohu Jiang and Yixiao Ge and Yuying Ge and Chun Yuan and Ying Shan},
year={2024},
eprint={2401.10222},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
Coooori/llama_checkpoint-600
|
Coooori
| 2024-01-20T15:42:46Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T15:42:44Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
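These settings correspond roughly to the following `transformers` quantization config (a sketch; the base model is not named in this card, so the identifier below is a placeholder):
```python
# Rough equivalent of the bitsandbytes settings listed above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                      # placeholder: the card does not name the base model
    quantization_config=bnb_config,
    device_map="auto",
)
```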
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
|
ZYM666/q-FrozenLake-v1-4x4-noSlippery
|
ZYM666
| 2024-01-20T15:39:31Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T15:39:29Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="ZYM666/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Kooten/Aurora-Nights-70B-v1.0-IQ2-GGUF
|
Kooten
| 2024-01-20T15:32:46Z
| 2
| 1
| null |
[
"gguf",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T20:10:24Z
|
---
license: llama2
language:
- en
---
# Aurora-Nights-70B-v1.0 IQ2-GGUF
## Description
IQ2-GGUF quants of [sophosympatheia/Aurora-Nights-70B-v1.0](https://huggingface.co/sophosympatheia/Aurora-Nights-70B-v1.0)
Unlike regular GGUF quants, these use an importance matrix (similar to QuIP#) to keep the quantization from degrading too much, even at 2 bpw, allowing you to run larger models on less powerful machines.
***NOTE:*** Currently you will need experimental branches of Koboldcpp or Ooba for this to work.
- Nexesenex have compiled Windows binaries [HERE](https://github.com/Nexesenex/kobold.cpp/releases/tag/v1.55.1_b1842)
- [llamacpp_0.2.29 branch](https://github.com/oobabooga/text-generation-webui/tree/llamacpp_0.2.29) of Ooba also works
[More info about IQ2](https://github.com/ggerganov/llama.cpp/pull/4897)
# Models
Models: [IQ2-XS](), [IQ2-XXS]()
Regular GGUF Quants: [Here](https://huggingface.co/TheBloke/Aurora-Nights-70B-v1.0-GGUF)
## Prompt Format
Unclear
## Contact
Kooten on discord
|
brainfox25/my-pet-lion-abc
|
brainfox25
| 2024-01-20T15:19:19Z
| 1
| 1
|
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-20T15:15:04Z
|
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Lion-abc Dreambooth model trained by brainfox25 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
|
Prajwal98/idefics-9b-PokemonCards
|
Prajwal98
| 2024-01-20T15:16:25Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T15:16:13Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hieunguyenminh/v1.1
|
hieunguyenminh
| 2024-01-20T15:11:14Z
| 0
| 0
| null |
[
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"text-generation",
"conversational",
"dataset:hieunguyenminh/roleplay",
"base_model:TheBloke/zephyr-7B-beta-GPTQ",
"base_model:finetune:TheBloke/zephyr-7B-beta-GPTQ",
"license:mit",
"region:us"
] |
text-generation
| 2023-12-28T17:33:45Z
|
---
license: mit
base_model: TheBloke/zephyr-7B-beta-GPTQ
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: v1.1
results: []
datasets:
- hieunguyenminh/roleplay
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v1.1
This model is a fine-tuned version of [TheBloke/zephyr-7B-beta-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-beta-GPTQ) on the [hieunguyenminh/roleplay](https://huggingface.co/datasets/hieunguyenminh/roleplay) dataset.
## Model description
This model can adapt to any type of character and provide answers personalized to that character.
## Training and evaluation data
It is trained with supervised learning and will be trained with DPO in the future.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 400
- mixed_precision_training: Native AMP
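As a rough sketch, these hyperparameters map onto `transformers.TrainingArguments` as follows (unstated values such as `output_dir` are placeholders):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="v1.1",                # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=400,
    fp16=True,                        # "Native AMP" mixed precision
)
```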
### Training results
Loss after 1 epoch: 0.6
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
codersan/mistral_7b_Enlighten
|
codersan
| 2024-01-20T15:07:24Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-01-20T15:07:14Z
|
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
beibeif/rl_course_vizdoom_health_gathering_supreme
|
beibeif
| 2024-01-20T15:01:58Z
| 0
| 0
|
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T15:01:52Z
|
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.39 +/- 4.39
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r beibeif/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# module path of Sample-Factory's ViZDoom enjoy script (assumed; adjust to your install)
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# module path of Sample-Factory's ViZDoom training script (assumed; adjust to your install)
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
divy1810/logs
|
divy1810
| 2024-01-20T15:01:40Z
| 0
| 0
| null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-20T14:14:39Z
|
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- generated_from_trainer
model-index:
- name: logs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# logs
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0519 | 1.0 | 112 | 1.4676 |
| 0.9805 | 2.0 | 225 | 0.9606 |
| 0.9098 | 3.0 | 337 | 0.8803 |
| 0.8772 | 3.98 | 448 | 0.8637 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Jelsafa/Q-Taxi
|
Jelsafa
| 2024-01-20T14:58:41Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T14:58:39Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Jelsafa/Q-Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Himitsui/MedMitsu-Instruct-11B
|
Himitsui
| 2024-01-20T14:56:39Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-19T18:16:15Z
|
---
license: cc-by-nc-4.0
language:
- en
---
Included in this repo are the full-precision weights of MediMitsu-Instruct.
(☯‿├┬┴┬┴┬┴┬┴┤(・_├┬┴┬┴┬┴┬┴┤・ω・)ノ
Hiya! This is my 11B Solar Finetune.
Included in the dataset I used to train are 32K Entries of Medical Data, 11K Rows of Raw Medical Text and lastly, 3K entries of Instruction Tasks (・_・ヾ)
Alpaca or Regular Chat Format Works Fine :)
(。・ˇ_ˇ・。) You should not use an AI model to verify and confirm any medical conditions due to the possibility of Hallucinations, but it is a good starting point (ノ◕ヮ◕)ノ*:・゚✧
|
Jelsafa/q-FrozenLake-v1-4x4-noSlippery
|
Jelsafa
| 2024-01-20T14:55:58Z
| 0
| 0
| null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T14:55:55Z
|
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Jelsafa/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Artefact2/Aurora-Nights-70B-v1.0-GGUF-2bit
|
Artefact2
| 2024-01-20T14:53:57Z
| 6
| 2
| null |
[
"gguf",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2024-01-14T13:59:37Z
|
---
license: llama2
language:
- en
---
<img src="data:image/jpg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAoHBwkHBgoJCAkLCwoMDxkQDw4ODx4WFxIZJCAmJSMgIyIoLTkwKCo2KyIjMkQyNjs9QEBAJjBGS0U+Sjk/QD3/2wBDAQsLCw8NDx0QEB09KSMpPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT3/wAARCAGAA/MDASIAAhEBAxEB/8QAHAAAAgMBAQEBAAAAAAAAAAAAAAECAwQFBgcI/8QAPhAAAgIBAwMCBQIFAQcEAQUAAAECAxEEITEFEkEiUQYTMmFxFIEjQpGhscEHJDNSctHwFWKC4WMWU2SSsv/EABkBAQEBAQEBAAAAAAAAAAAAAAABAgMEBf/EACMRAQEBAQACAgMBAQADAAAAAAABEQIDIRIxBCJBE1EyYXH/2gAMAwEAAhEDEQA/APjyRLAkiRoIMDwAAluWPHaQAKWBpBgaQDSGkCRNIgEiaeBYF5CtNUsFrllGeBKUwVZyTjHJVCWWaKyVkKJZCG44xyXVw3Jo06erKQ9TXiJr0tWUR1sMRZjfY4FsfUEIllkfUOEToqUIbEnAthHYk4BqM7hsYtQsM6c44ic7U8lhWGSINFskVsrJKOQccEofclLDArii+K2K4rcuitgqEkVtFskVsCtiRZ8vKyQxhhDSJJCSLYQcgoSAbWBFCZXItlwVSCK5FbJyIsiEicURRZEKaBoBgQZEm0RCBIkhIkgpoeAQyiuRWy2ZWQRE0TwRYCwSSIkkBNIGgQMCA0NLINACHgEDYRBojgmyIASREkgqaQMaQmERYhsAoSJpEUTQEkiSQkSKItFci1lUhRU0RZNkGQCRNIiiaCJzhFQyUlieX6iM0s7AJImkRRJBQyEibIMCPa3wRwXKaUcMrbywgQ2CGyKjjIsYHnAclAkPAIAATGJhEQAkiKWAGBQh4AYCESZEBCJCIIgSxkWAhCwMAEAwAQDwACAB4AQDwACAYsAAAAAAAQXolgSJxW5oLAYJySwRSAWAJNEQAaQDSIqSRZFEYothEAcdiGNy+SwipLLCpxWxVZLDNGMRMtvJRbXI21M5kJYZuonnBKjdBGmiGZIr0latsUWdSOlVc49vk524jbo6Mw4MvUoYizt6Kj+F+xy+rwxk8073tnfbzco5kWwrJKvMjTXUen5LqqFePBNQ3NSq2Eq/UWVuMt8MQOPqV6md/VRxBnB1C9bNQY5rCKXyaLTOzSFwPOQSyyTjgKIcl6WxTAuT2CoyK5FjWSEkER72lghnLHISAkiyubiQRKJRNvLEwEFIrkWMrkEqqXJAnIgRDROJBE0BNDEgZRFkWTIvGSIETRBE0FSQ2SrwuRSxnYoqkV+ScyBEDZAmyIUkSQiUQLYYxuQmNEZARRIiiXgBoGCEwiLENgotgJE4kWsEohUxMnFJtZHNJcFRUxDYYIBE0RRNIKkiQkSKJquLrz5Msy1tpY8FUxRWyBJkSCSJEETQQmxDZEKkixLYrRNPYAZWyxlbCIgAEU0MSAoTQIGNAADEAyLJEWAhoQ0EADAKQxDAQDYghAAEGvTwg4b4yZr0lN4IqTjwxN55CkIYBCAAAAAADA8AW1wyBVgDROrCM7WGAgGJgIAAAAAA0okRRIB5yNCJJAJkSbQsARJRDtJJBVkUX1opgjTWgFYtiqCyy+1bEao7hUpLETFbydG1Ygc6xblFXBfTZhlLQ47BHZ0l7Uk08M7+hm7prueTyWntxJHqOiT77InDy+oley0lP8D9jgdbjibR6zS1/wC6r8Hl+tx/jSR87x9fu537cKurMjZXSOinJupoy0eq9rql04iVxq9R0p04iUwq9TN89tyuVr49sGeeuWZs9L1SPbFnnbFmTO/F9Kw3IztGm9blDR0VDgechgaQEoFuCuCLStIsrZZIrYRW+QQPkEyIkiaIImiqkJjEUJlUjbV8r5T78ZMU+WRKqlyQJSIkQ0WIriTQE0DBDZRBiG0LyQNE0QSZNFEkD4GkD4KqmZAnMgjLIZBk2QYUFkSCJxAkRkaGofK+5nkURJEUSIBAxoTAj5LItYKmNMCUt2OJDknECYmMRQhDYEAiaIomiiSJCRIKhIpkXSKZiorZEbEQSRIiiQRFiXI2AU0SRFE0AMqkWMrYCARKKyyIEDNENNKSzgrtqcHuVVQ0JggJCGADIsZFgIaENAMAAAGIYCEMQCAYgAQwIiJoho7J6d3Rx2rx5KCcdRZCt1xk1F8oCtga9BVp7pzjqJY29O+DNbGMLZKDzFPCfuBDADABF1c8FI08AaJ2ZRnbywbbEAAAAIAAAAAA0Jk0ypMmmBNE0Vpk0wJAkCJoAjDJJ14HF4Jr1MBQiaa44RCuGTVGv0k1Wa3knTDIWR9Ro01eQ1FWojiBzpLc6usWEc5xLErO0CRZJCUQiMX2s9P8N2d18c+55iSwd74bsxqY/k5+abzSvrWlh/ua/B5LrCzqZL7nsNHv09P7HkuqLOsl+T43jv7Vy6ZtLTlcHRooy+COkp9COlpqdzpe/bLLdTiJnrp5Orqa8IohViDZ1479Nx5frKxlHnZR5PSdbX8Ro4Moelnu8V/VuOXevUZ5I2XR9TM8onaNKcAkTcRYNKtoh3snOHa8Ea8rgm9+Q0pkVyLpoqkglVMEhvkERDRZEgiaKqQhiKISZVIukUzIzVTENiIgSLEiC5LktgGkGNho7HTunQvpc5Mo4ctiPk2dRojRe4xMZBOL2JIgkTRVTW4pLBJEZAUT5EkOfIIiIsgyyRDAQLksSIpFkUFDWxBlrWxVIoSJCSGyAExoTCIMBgFCLIkEiaAlgGAihAAEDRNEYk0USiT8EUSxsVRDty+4z347ngnMpkSorYhsCBolgSGwExIAQEkTRBE0URkVsskVsgRdpknNFBZXPtkB36IQVXg52v7VJ4FHWNQxky3Wub3ApY0hDRA2CBjRQmRZNkGAAAAMAABwaUk3wTscW/SVk3CSjlrYCJEkxAIGMGAhDBgRAYEREB4EACAAAAAAAAAAAAEAAAAAAWJkkyCGBYpE1IpTJJhV8ZE0zMpFkZhGlMnDkzqZdXLcK3UrJrUcQMuneWbWvQYqsrhmZt01OxRXDMzqUVYrbLa1HI16wzB2nR1//EaMihsNZrNKG41DYscfUWqv0jWdYLY4NnR9R8rVR/JRfHGTNTY67k17izZivufRtQrulJp+Dzuu9Wul+Sv4V6unoHW5eBW2qzXvHufFvF58nTl06+lq/ho6Wlp2KNLDNSOtpqvQcPl7SObq4blfy+2ls26qv1kbq+3Tno5vpXh+rx7rpHGuq7YHf6lBu+T+5ydVH0cHv8XXpqVwrYZkzPOB0p1cmWyvc9UrcrI4iUdy5wBQNtRGMSWCxQBwK2zziUyRqmiiaCVQ0CJNCIyaLEiCRYkajUDESZFgQkUzLZlMiM1WxDYEZOKLUQgsliRQ1wX1ay2iPbCWxUlsKSCo2zlbJyk8tleCTEQNE0iKJookiMi
eNiEtkBRLkaQpckkREZkEicy+mVLocZr1L7Coprh3zSXlmqzSulJtppmVbPY0xtnZhSk3gCE1hFEkarVhGZ8lUkhtEooTRBFCZLApBEBiGgpomiCLEACY2IoQABBJE0QROJVWRQ2ERy4KqmZRIumUtOT2JWagwQ5RceRIgnEbEgAiwQMaAkiSIomiiEitlkitkCAAXJEPIAAUiSENFDYIGEQBkWTZBhCABkUAAFDLHa3DtKwAAHgMALANEsCwAsCaJYE0QRESaEAduxFk+7YgwhCJYFgBAAAAxDAQACABMbEwAAACS5JCQyhgCAKY0xDIJKWCyFmGUkkB0tNdho6UblKOMnArm4muq97Esajt6ddzR2IQxRn7HA0eoWVk7b1Mf03Pgx004euebmVpekeon33P8ib9JpiqsZmaVDFZRBZmbJbVGemK5Wp2yYHybNVLdmNm4sdXpPU5aV9uWkd7QdQV+qi2+WeMTxwdLpepdd8W35OPl8U6lpZsfYentTqj+D0Gnr/hfseQ+H9bG6uCye1o/wCCn9j4HUzpzkc6+rNv7kNVXih/g2uHdaR1tWKX+Dtz16V4fW0d05bHH1enaT2PWajT5b2OXq9Lzsenx+XGNeUtqayY517ne1OmxnY51lGGe7juV0lcyVRD5WGb5VEflHadNys0YA4mtUhKh4N63K5tkTT0zoGs6zRrrdGqnHRVfOtU59r7d+Pvs/YjbXjJlk51qShKUVNdskm13L2eOV9ilY3vugSyTlESWAhxRYohCOS3t2K1FLRFouaK2iqpmUTL5opayyViqgJNYERlbWixlUCwqpIUkSS2EwqtkSUidOnne32Jbc5ZEVosiiOGnh8onEomV2FuNimwqqHySiRfJNLYyyhMimSmV+QLUaKeTLFmip4LBZdwZvJda8lK5FFkVsRkty6K9JXJbkRHBCSLMbFciiGB4GkPBAkixISRNLYqoMRJoAIiG0BFNE0QSJxKq6KCXBfTpJzpdixheDPNlFE2RrklLccypkrNWXzUuClAxogkkDGkJgRGhDQE4kiKJFEJFbJzIECBACAkACABoRJADBCGgBkSbIMBDAAAYAgGhiJJALA8DSJKJBHAYLOwagBV2kWjQ68LcFROf0QlL/pi3/gaMzRFoulDDa8rZr2K2gKwG0GAIgMAiIDEAACTb2BrDAQDEACGIAAAAmhiQwGhoQyqYAhkAiSREnEKmkWwRCJfWssLF1MnFmuWsahjJRCGxVdsRs/nZlksdy7TEnuOUngYxW6mxd25ffqF8vGTkK1xYSvckS8+2bBfPukylA3kRoNkqpuE00RF5Cvb/C/Vvl2wjKW2T630/VRu0cWn4Pz3oNTKi6LTxhn0/wCHPiFfpYwnL+58r8z8f38+Wby95Qu6zI9bH+GZ+l6uF6Uk0zVq3lHh+oz/ABxLaM52ObqtNzsd+UMmLUVZyTnrHN5XVaXnY5dum34PV6nTp52Obdpd3sevx+Ulecs0+PBWqDtW6T7FL0rT4PXz5W505qp+xGdbSOp+n24KLaNnsdp3rcri3V8mG2s7N9WMmC6rc6zpdcyUCtwNsqit1mpTVdccF2NhwrJuGEa1uM8olUkaZLBVJFVksKHyarUZ5IM1W0LG5JoEiIlBE0hRRNIqpLgi0WJbEZAVSQVXTpb7HjPIMjgIllttvllkUQiiyKCrMrtM9pfjYpsLSqMbk8bEUtyb4IypmQ8k5kCCUS+vgoTL6+CwFjK09ydhWmKNEZ4iRbyV5EpEFvgrm9x92xCTCHFE2iuD3LluBGKLMbDhHJNwwgqloWCbQYKKmh4JtEWiKROIkiSW5VaqtRZCpwT2ZnsLEtiuwqs8yplsypkrFIkiPkkiCa4IyRJbCnLIECSENASiT8EYkvBVVTIE5kCIGIYIAQwBAIkheRoBMYMEAMixsQQAAAMABBTRJCRJICcVksUAgjZpNJbqr66aK5222S7YQhFuUn7JEqxRGrJ1NF0Sy+Ndl8lp6bPocouU7ftXBby/Oy+539N0/pfwzKUutxjrtfFenQ0yThS//wAk902v+VZ+5n//AFjrqNRO3p+k6fpJy5nHTKyx/mc8t/2/BzvX/G5xXOTt+Z8rovTlDteHqL4q22WPPDjH8RWfuyF0uu1zUruoauuS/wD5Djj9srB3ul/GPxhqdUoaDUajUzj6nVVpoSWPulHZH0TofWutavGk+KPh6dlL2/Uwo7of/KDb/qifLP4vw/4+Q/rJ9RlHp/XZKydvo0+usX8Smf8AKpS/ng3hNPLWcp7b+d1GnnRbOu2DhZCTjKL5i1s0/wAM+1/HX+y6vqVCu+HKqqbpySso7u2GHxKK4jjylz+Vv47X/BEur67V6qvrXSq334nCy2SkpJJS/l8tN/uanUY+Nr564kWj2XUv9nfVdLordbp5aTW0VR7p/pLnOUUuX2tZ++2TyMo4NyypZZ9qmhE2iIQgwGBp4QQV2OufdyKyffNyxjImIBAMAEJjEwAAACaJIQyhgAwpgAEAiyCK0XQQVZFGmmOSiKNunhnBGotUcRMuoOg44ic/UfUxG6rqpc3sF1TgtzRpZxityOrsUuCsOe+SJJ8iDJAgGluAeBIbQkBfSvUjraXVz06Ti2jl0Lc2PaIs37dOY978M/E3ysRskeur69TfJLuX9T4zTbKreLaZpo67bRYsyZ4fP+HOvfLHfD7XC6FkcprcquWTwPSvi9YUZyPR6fr1N6XqR8rvw98X3HnsxtuhkxzpyXvVQsW0kJSTJLYyxS02fBH9Hnwb+3JdVUmdJ5LFcmeh24MV+jaT2PVvTJx4Mt+jTT2OvHmsXceL1Glazscu+hp8HstVoVh7HF1Wjw3sevjz6vyebnUVfL3OtdpsPgz/AKZuWEss9PPcrU6Zq6iyVOxqjppQliUWn7NYJzqxE6TpuVyLKtzNZDB1Z1bsyX1G5W9cqxFLRstrM8oFZqhoEixwEohDiiaQ4xJqBWglsRki3s2IyRVZ5IjgtkiGCIcUWQRGKLYosUNbGe01SWxltFKqXJNrYguScuCMqJ8kCc+SBEOJdF4RVHks8ARnIgmEhICeQyIaAkQkSIMBxZbGRSiSYGul5ZfNbGSqeGXysygI43G4hDdlnbkChojgvlAg4hVaROKyw7ScVuVU0timw04xEz28lWs0ystmVGWaRKJEnEIb4IMsaIMoRJEUSRBNEnwJDfBVUz5Ikp8kCIGAAA0AIGAiREaAABsEACY2RCAaEAEhoiSQVt13S9V039O9XV2LUUxuqaaanB8NNf48GeCEpTs7YtyljaKznG/COlBQ6e1CMFdrm8YaUo0v2S/mn/VL2b4ilToZKuNuoshp6pbxlZnMv+mK3f5xj7nsOh1vQaZV9Iqt1Gv1lfqtcHGyEHxGCi21nlvPDS23Ob0DolGv1srOp3Kai+7UTtsfZFJZabW8pYT2TSXu+D6f0LUazrFDXw7pK+m9Nc+yWtsgvm2pbNxjjGNsLLf9t+fV/jpzMmuH0r/Z/wBTvS/WaWjTqW/bKUdv6qT/AMHqdD/s76TVCP6vT6eySX
MIbv75f/ZHd0VdOjqdOmnO6Sf8S62bnKUvOW+X/ZcbcGXqXUo0XUaRT9d7lKyWforgszf+I/v9jlvtr9q16PQ9O6HpXHS106WjOW9opv3bfJz+q/GGg6Zp1bG1Xyms1wre8/HPt9/6HzXrvXr+r9QsulNqGcVxztCPjC/G5ytVqZ6i12SbxhRiv+WK2S/oak66Pjzz9+3sr/8AaZZGN0/0kIWuLjCUJvl+6fOOT51rupWzhhL05eXHZv8AJZNd+W/wYrYc44zg1zxhe9T6b1jVaC75uivlXPZpp7PHhrysZWGcnXVqV87K4KFc5OSiuI53wvsvBohW6tbjDw4vP7I1aP5EL4vVUK6pPMoN4z+/9Tpn9c7dcFwIOOD2vWfhDTrpM+rdF1lep0kZpTrk+22nuaSUlw93zt+55CcMciXWbGdoWCcokWioiIkIIiAxAITGJgAAAFiGhEkADEhoqgYkMgcVuaK0UQW5qrWwqrIR3OhpocGSqOZHV0lWcGLXTkrY4gcnUfUzuauPbA4d+82OWulcG1wKwnFbELTbFZ3yIk+RBkgQMlFAKXAlyOQRW4GrTxyapbFWmjsXSW5XXmei4iYrn6jfNYgc65+phO0Y3TrfpbRt0/WbqGsSf9TmtkWZ64nX2416zSfFVkcKUmdzRfFEJ47pHzfOOCyF04cSZ5u/xOOvpm8R9g03Wabcepf1Oxo9VXZjDR8To6rfS1iT/qd7pvxVbU13Nnk7/B6n/iz8K+xR7WiFkFg8ZofjKEklOR2afiOi5L1r+p4+vD3x9xLLGrU1JpnF1dKy9jqz19VsdpI52osjLOGi8Wxzrj20Jvghp4/p9TC3tUu15wzZZjJV2ps9PPdhKNY/1+o+Z2JbYMt2lcY8HY0WnU/Bo1OgzDg3PPlx0nTx9lWMmG+B6TU6FrOxyNVp2s7Hs48krc6cO2BQ6zfbXhlDgdtXWKVYo17muUNyKr3LpKg6klsEIblzgyUKy61EOzYrnA2dmEUziXVYpxK+01TiVOICrhkuhDchGODRCJViqxYRjt5N9sdjDatylVJbjlsgitxz4Iyzy5IkpckSIlFE2tiMSxr0gUS5BBLkEAySRHySTAGQZYyp8gCJJkcjTAtg8E3IriOTAurswzTCSZzk8F1duGQb+3KISgOq1SRdjKJqMvaThDcsdZKqG5rWpUZRxEyWcnRthiJz7VuyqzTKi2wrIhEoijFyeEWKLi9wgkVssktitlCRNEUWRWSBxJS4EluOfBVUS5IjlyIiEAAA1sDBAwEMQ0gEC2Bk41OUHLwgIsiMQQDQhoBjQkShFykorlvH9SK2UT/R0q6O188qp+YLhyX35S9t37FmkTqjGUF/FsT7P/bHjP5byvxn3Mt0/mWuS4WFFeyWyX9D1HQ+hT6r1KNcM16acK+63Gfl1qKcv34SXuyWtcz29T8B/Dun6hRfqtepw6XXH5SnKfap5acksctuKy14wl9vZajrFuu11PReiJabSwjiycFj5daxnHs0sJfeS9mcfV66EaK6NPFU6XTx7KK09oLy37y539zDL4gh0fo3fpElrNY32y8wgm0n/l/l5OPT0cc69t1jXrofRnPTVY7F2VrwvY8N1DqE4x1s5WOU5Q/TRk3viOHY/wB5SR5yjruueqVF+ptnRdNd0ZSck98+eGSv1nzLKot5Tobf/VOXc/8AT+hOOLL7XrqSZFbanhLaWcbvnfwQnHGU/HJGDy/wWSsbiu5J9vHv+D0R5rWaxdscGecMYNdrjKcUlj+aX+n/AJ9iN9XbJY4e/wCDTLn2x/3pS9oNv99kJxxheXyWxXfOyfju7V+3/wB5Itcv3IajHUSqsazL5U122xi8d8cptP8Ap/U9L8Qf7MNXXpY9R6DP9d0+yCsjGLzZFPfj+Zfjf7bHmJw3R6z4L+OregdtVubdDObU6/MN36o/f7eTPUv3Gub/ACvnV+nnTJxsi4v7meSwfafi/wCHOm/GtVnUugzj+vrjJ2UwX/H2yttvVxv5Tw/B8btg4txkmpJ4aaw0TnrV65xnYiTRE0wQiQgiLBoYgEAABYhiQ0UNDQhoimAAgLK1ubao7GSpbm+qOxOli/TwzJHc0dPpRy9HXmSPRaSjFefsceunXly+ortizgWbzO/1bbKODLeRvn6Xo0tii1GniJmteWbZqliwSaFjcrBdpKKJeCUVhAUz5CC3HZyOpeoEdDTR9JZJeoenj6CWMzK9EnpC7aBy7X6mdTU7QOVbyw59q2yLGxMORDQhoIki+nkoxg0UIsa5aFOUVs2icOpX0vab/qVy4M1j3J1JftrqOzR8SX14Tk/6nQo+JnPHczx7YlNx4Zw6/H4v8cbzH0Gnq9dv8yNdepjNrDPnVWsnW9pM6mk6zKDXczz9/jWfTF4fUOlYlg69lScTxfQOu1tpSkv6nrq9dVbFYkv6nzfLz1z17Zxl1GkUk9jha/Q84R6puM47YOZra00zfj8tibjxWp0mG9jDPTtPg9JqqllmGWnTPdz5l+ThTqaYlXudSzS5lwRjpHng7TyxqdMHyycK9zbPStPgcNM/Y3O43KzOrYy21HXnQ0uDHbUanS65U00Vmy2szuG5vV0Qjk1QqfbsUVxaZ0KWuzcutRhvjhHNt5Z19Wtmcm1bs0quC3C3glWtwt4CMkuREpckSMpRRN8EYkpcAUvkED5BASW5JEUSQA+CtlkitgLAIBognF4Q28kOAyUPI0yIIDRXa4s20XJnNJwscWSwdlJSRZVXuYdPqM4TOvpIqxbGLcT6Z744icu5bs7err7Ys4t31M3zdXWSzkrZZPkg0UKLw8osTcnkgkTggCS2KXyXTK8ACJwRBE4hU4rcdnARW4W8FGaXIhvkCIBYAYCBjEwEAAA4drsj3/Tnc2dQhp6+z9LJ4a9SyYgABDAIBoRJEVKMcvYurg67ISaeItM09FhGXUanOELFH1OM1mLxwmvKz4PonT7vhjW+nrXQqoWf/vaNyrz+YppZ/Bz67yu3Hj+U183/AEkoWdjWfKfhp8NfZn0b4NlJdInplNN1xjKW267k2l+0Uv3bNs7fgvRrs0vR7tRGO8VqLpOK/Cb2OM+r2w086dNXRRGxvunXUlOSy8Jy5wk8JeDne9+nXnxZ7q7q+ty3TGWFH6l5b8L8Ll/098ee1utU7nmeflxUEm8Yxwv7/wBy7USlGDkmu77pvP8Ac83rNLqVKdliSWVBqPhvLSx+z/oWT5UtnM9O/ppuUIuWzT/oKDnC597zGSzB/bbb9jl9O1s5dtFrfemkm/K+51bd4wXHpwvtwdJ6curvtpqeU3+xN77FGlm5UJy5y8jsnldq5m8fheWbcgvUnP8A5nt+PH/n3KtRe6qpKCzZJdsV92XPC2WyS2MtMPnXy1Et4Ryq17vhv/QqGq1XVCuO6isZ9/v+5CSwXtYy2VS5KjPfmFMn/M/SjNpf+FNf+7/sbO35/dP+SK7Yff3f+hRp6+1W/axr/AwbNB1XU9M1Mb9NbKuS2fa+R/EKr67G7q1EIV6qO+sqgsKX/wCWK++3cvDeeG8YpLdoj
XdPTWqcHhrZ/dPb/Df7EvP9anX8cOawyDNesqVdr7PolvH7fYyMiUhMYghMRIiwAAACSJJZEiUXhoon8tpZI8Fzmu0pe7CmCAFyQaKFlo6dUdkc3T8o6tC7sGO1dPp1PdJbHpqqOzT5x4OX0jT9zjsekup+XpP2PD33+2OnLxHWX/EZxcZkdfq7zc/ycqK3Pbz9NUSWImWe7Ndi2Mk1ubjNVtCwTwLAZJFmMIhFbljXpAonuyyhZkiuX1GnSxzNFJ9unTDFQoxzMvjDFRGuPqZNemRk1myOTY92dXXPk5M3llcfIrYMlKEorLTSfuiDK5AFsABE08s10LYxx5N9C2LG+UrNkY7HubLdkYbHuKvStkSSWZJPgsurjCKw+SOakaeCLGgNem1tunknGTO5pPie6vCcmeaRZB7nPrx89fcMj6R074pUq0pyN8usVXR+pHzWuyUI7Noa6lbW/qZ5e/w5u8sdeN7q/URm9minuTPIQ61YnvI3UdaTSyznfB1y53iu/wBqbL6qU2cijqdc2tzr6PUwsa3Rz6nXKZU7NMnJbGjT6FS8Cssiprc6eg7ZYMXuyG1hv6au3ZHI1OhazseztrTgczU6ZNPYvj89J1jxOo07i3sZJVbnptZpFl7HNnpN+D2ceaVudObXVuba6fSWw0rT4NsNPiHB0/0jc6cHWRwmce36j0HUq8JnAtXqZ35utylUskb1gupWxVfyaVjlyLBJ8gluRk0iUuASCfAFL5EgY0A0TSEkSigIzK2WzRUwAABADEN8iABpiGiCaGJDKJQk09jt9L1PbycRcnS0MfSZ6mwrp63UKcdji2vMmatVJxRzpz3JzMiRGfJBobeWBtSSLa1kgkXVrYCuxYKS+0pACURYJRCrYLcjcTrW5XeVazvdmqLroolC2GZvhoyjlOU/qbeCMkACYDIsaABAAAAxAEDAAACSIjRFdXo+nueo+YoyjDtaU8bP8e56Gubil3bvG+CrTz74pd2e2Kit+EtsFzRyvv29E/X0nVXK+TzJRiuWy9UVd8YqzGXht8L7mRNx4E2/Jiyu86jode/9No1dceku6yCh652Nby915Xv/ANjmVaDRamyE7pRzDdw7ln9//NxtZYOuM/qim/ujfHV4cfLxO7P/AExdZ09dGpourwmppbexKdj+RXKXKeG/fbBKfStPbLMovnOMvBXq4fK0s4/8ryv6mt2s/H1i+qXbBNcNbjpn32Sl4xhfjP8A9MyXXdmmUE8Snsvt4L6KpuGJp1152jn1NLZJ+2yOkcKuf8fMU2ocSa8/Zf8ActUVhKKSilhJcIh3KKSS/CQnb2/Vu/bwjSHNN8beTBqrGsVV575LP4XlmnUdQVVEpNJuWyXuc3S3ThqvmXxf8XZNr+hBq6fN/pq4eFn/AFLIw+triUm/6v8A+iiiSqjal/LJtf5Rt08cVdr3yv7mojFYsYfsyqxZNVsM9y+xRjuS/G5FYNTDuqa8xe3/AJ/b+hzmda6O35RzLY9s2ZoqAbEQITGJgIAACaJCRLAAgGBVAIeASIL6XhnW0L7pxRx4PB1emzXzo59zn5PpY950LT5UdjudSh2aN/gxfDsFKEWdHrvo0cvwfJt3ySOnP2+ZdVeb5fkxVwyzT1CXde/yU17H1efpuq7o4iZJLc23vKMrW5uM1V2iaLnEi0NZVxWWWTWIjhHcdq9JRkazI3aGGZoxpeo6nTq8zQtOft0ZRxUiFUeTRdHEEKmHobM69DjdReGzkye51OpPE2cmfJuOHf2vu1StqUe3DMwCDkYCGiicOTo0L0nPrWZHUoXoLHXhVfsjBPk36nY58+RU7QYs55GxEcyGAASRZXvJFSLaVmQVt7cVmK17m+W1Zz7eWFqpsam15IvkAy0VamcXs2dnp3U7ISW7PPw5OjpNjHXMs9pjtX9blGxZZ2uj/EEcruZ4bVzzMv0Vso8NnHr8fnqJeY+rQ6tVZH6kEr4TWzR88h1C2vGJM20dcnDaTPH1+LZ9Od4el1KTbMqpTZhh1iFnLN2l1MLWsNGLOuYxmNmn0Km1sa7emOFWUjZ0utTaO3dpU9O9vBx/2s6amvlvWauyTR5a9Ymz2/xHV23SR4vVRxYz7Hh6+XMrrxdidEfSUahbs16eP8MzahepnZtjkiC5LZIgluESiKzgnFbkbdkBnfI0AICcScURitiyCygqEypltqKghDQjRoZUx1tT1Kbp7l3pewGd7ETrfED6fLVxl01Yrcd0uMnJABoQ0BNMkmVpkkwLFydPRvEDlRe5uqs7ayWIs1diZzpyL759zM0hIEmSTyVjQVfEvrWxngzXUvSUUX7MoL9RyUICRJIiicQq2tFV/JfUtijUfUVaziGwIyBMYmAIABAIEDAIAAAAAAKZt6XpK9XfKNme2KzhPGTEkd7pGgtobuug498V2p8492vBnq+m+J7a6K/0lsYRjJqcm5Tznxtn+h0ERXBJHN1t0sZzjwJok1vlbMi5Ncxf7bjDUcY4WV7Djh8EHcl4l/8A0f8A2Iuxviqcv/jgvxPnF7fbzt+TJq6nelGKx7tlUtZZmSrrSS5fj+xSpam14nZjP8sVjH5ZqcxjrutMKqdO+6TUrHw3z+yLJWPH/Kv7lNVUat4rMnzJ8sm8eTo4k7H/ACLP3/8AsotsjXFysllLwv7fuQ1OuhU3FZk1yl4/89jGrnPt1L9WHtFcL3/cC2Ceothdf9D+mPhexvtqjdW4yW3+DM4pQShvW94P++C2mzKwxBknY4WQhLlzipffGf8A6OnXPtrZz9bT3Jzj9S/04LK7vmUqS4aKLpPLeSqOzkvZg5ZWSuU8Tb8NAUy9Vf7nN1UcTOlDen92YtXHMU/ZkoxMQ2IgTEyRFkCAAAsRJERgSGiKGmFSwNIimTRA0jXpLfl2pmZIsgsPYl9q+n/CevjJRi2df4kuX6SWH4Pm/ROpy0lqy2kd7qfXP1Gm7c52PndeCzy7HTl5rUy7r3+RQWxXKffY39y6HB7mtVXLYzY3NFzyypLJqJUBMscSDQZSrW4r1hE6olepZSs8FmR3OlV5kjjVbyR6PpEM7me7kOftfqlhE6a/93bFrNpGmvC0bf2Oe+na15LqjxbI5UuTp9Tlm+X5OXLk7z6cOr7IQxFYA0JEkBbSsyOrQsQOZp1mSOvUsVmnXhi1T5MEuTbq3uzDIjPaLEMQYAAAEi7TrMyk06VZmgsbLVis5tnLOlqdqzmWBelTAPIBlKPJvoeImCPJvrWKwMmonmxllFvaUXPM2KDwyDpq7I3YYVMmrGTExp+bKL2Zu0XUrKprdnJ78ltT3MdcSz2zY+p/C3UVqWk+T21kM6Vv7Hy/4Mb+dFr3PqE7orRb84Ph+fmc+SyMyPmvxOv95kjxGsX8U9r8SzUtXLB43VrNx9X8W/pF4W0RxUYtT9TOnRD+CYNXDEmeqV01gkRjySnyRjyaFkFlkbyypeohqOQMw0IaAsjwXVrYrS2Lq16Sqz28lRddyUkQhpASXAFb5EN8gllhCGgawCABpiAKkmXxswjOiWQLJSyVNg2JgGRoQAXQe50K
o+g5kHhnSpsXyyVKy6r6jOuS/UvMylAWOtxSbHEHOUkk2OKKrRUvSZtR9RsrXoMd/wBTC1nYIbBBBgiyZCXIAIlhdoghAAAAAAAAAFa+mxjPXVKeMZzv5fg9VBeeW+X7ni4ycJKUW008prwdrRdcTShqMKXHd4f/AGMdT+unN/juokZ4aquf82C1TT4aMtJ5IsTkRcgJZIWKUpOEsxivq8N/YlC2MJd7w+3eKfDfjP2XP7Y8ma+3uUnOTed5Nvd+/wDUT2biq2xSajBYj/KkufuSiu1fcphP6rJNR+7eyQK1zawnGL4yvVL8LwvydJMcrbavc1HlmS7Uuaca877OWcJF0618uU7n21xWWlu3+X/oYLtZC+rt08JJcJtY/oZvX/G+eP8Apy0dcEpaiU5rOFCvZLP39y+On0zU40qcLEsyhN8/+e6Lo1Spul3RUlOzZPzxhr8FuphFaquxJLshLufuZ+VdZxy5ukui4OtvuqfD8x+z/wC5f3/KnifPh+5jellXpo3xl/LlrhrYktVGUIxvTWVtL2OzzN3zYWw2kmuPwY6ZOq2yl7fzR/fkhCuULO5SWHxJcMunV8xxnlqceGuGETU9mVWT2F3OK35RXY8xwuWwLYf8KK91kzahZjJGriOfCWEZrHmWAjnMTRKSw2JkUhDYiBAAATQxIkAJDBIlgKSLIoikXQiShxiX1158BXDLN1NGUce+8NQopbkjTdVKMN8m7p2j+ZYtjR1PSfKhwef/AGnz+LUrgwhll6rfaX6fT9z4OhHQNwex068sjWvP3ZTIRZ0NTpGpvYo/SS9jU8ksNUZyDRbKiS8EHBo3OpTUq47GbVI2wi1Ex6rk3KVnqfqPUdGXoPNURzNHrukVYpyY8tyHKvW/WE7XHS4+xZrIfxCnVrt0/wCxz5suN/J5bXSzZL8mF8mvWP1sxs9UcrfYBsQBkIkhIceSjVplmSOtBYrObpI5aOrjFX7FdufpytW/UzFI16t+tmRkc+vtEQ2IMgABASRr0azIyI9N8JaTR6u+yOraTS9OXgLPtzdW8QObM6/WYQp1dldcu6EZYTXsceYXpBLIgyAZThyjcnir9jDX9SNknioDFY8zZFDl9TBASTJZIEgGpF1U8MznQ6l0u7pD0nzrKZ/qtPHUQ+VNSxGWcJ+z24JUzXs/gi5O+MX7n0nWP/dML2PkHwdrFVroJvyfVtRep6ZYfg+H+Xz8fK5308D8QQlHU5fGDy+ohm09l8Qds5Hl7a8yPb+N3+pzU9PDNRi1sN2dCh9sMGPWPLZ6eb7a1xrYYZCKL7lhkILJ2aSpXqZVqPqNNMeTNevWwM5JciY48lVclsX1r0FXhGmC/hFajFd9RSXXfUyoiET8ESX8oRVLkQ5ciCG3kEIAGAAFA8iAB5FkQBDyMQBUkXQsaRSiQDnPLCLK5cji8BF6LI8lEZF0JZZBsgsVmC76mb08VmC1+plVS0MGAAQfJJkGEAwEFAxAEADQAAhgAgACKv0+rs0+En3R/wCV8HW02thevRJxl5i3ucIlFuMk4tpryvBLzrXPVj0qvsXlMHqpLmD/AGM+ku+fRGT+rhl7js/wYddReqxu4vdFd1rnXBYw5vj7BjNal9s/2I6j0zrxyo4X/n7m+XPr/iM7MSUYru7ffj8lNuovrXdXjOcttZyaKqHJbbL3HfUvlNP25M261JjVptTVqa4TU+1/zRT3XumTl8vtUpSqjJcrKaPPfIXc9m37YIqEPnRW638j4nzx3pazTwk52airu/6s4/CRiu1y1eaNNGWJbTsksbeyRzVWna0lvKWEl7I6mmpVVTkkuNvuanCXyX+HrH/ujgvOIL/Uy2Ri3OtreLUl+GX37qEfGUvyZdRPt1EJ+PpkbrmHXbR6q/VF7tF9OojYttpeUycH/Dx5WxRbTGdmVs+1vKAncu7eO0v8lUP4k0/bkhXc94WfUvPuHf2zbXDQRoslhYRjlPOoiiVlvbHL5M0G3YpffJBGz65fkiSlvJ/kiAgGIBAAEEkyRXkkm2wLExkXFpC7gqyJprMkZYL4WJeTPQ3VLdHToXpRyKLV3I6lN0O1bo8nllZr03QKFOzOB/EMFF4NHwzKHY5NozfEVqlekmj5nPVv5GNxk6dp+7Gx3oaNKhvBz+lwzg71yVelz9iefyX5YlrzV+li7XsSh0+MlwZr9fGN8t/Jq03UYPGWjV+cnpPau7pKxsjnX9NcHwenr1ELFyiN9EZrgeP8jrm5V+TzC0TVWcHF1sO2bR7uzSxWne3g8X1SOL5L7n0/B5Pm1LsZNHDusX5PbdKoxQvweU6ZV3WxPd9Pp7dOvwZ/K7yJbjmaur+KvyZOqQ7NM/wdXVpfOX5OX16ajp8fY5eLu2yJ8ni9U8zZlLtQ8zZSz6cCAAAaJRW5FFkFllHQ0Uc4OlNYqMmihsjbqFioO3P04epeZsyPk06h+tmZ8hypCGxBkDQhoBrk6WiWN0c6PJ09Gtg1yr1n1GCfBu1j9RhmDpWCAaDKdS9SNVrxXgzVfUi65+kKyPkEIaCJEiIwAEt+BhFbhW7puqel1MZp4wz6V0/4hr1GljCUlnB8rWxop19lDXbJnm/I/Hnl9/1mzXvOrWqxtp5OG1mRhr607IJTeS2nVxnNbnn48XXjmVz+Njp16fMMnJ1qcJtHodLOE6tmuDjdTivmvBvxd/thK4l0ckIRaRosRWtkeuV01OlelmS7ebN1Uf4bMVq9TLPsZ2iUFuDHWvUaaXY4NcY4pM/bujb24oCuVd9TKy236mVYAPJJrESONycl6QihiG+QCEPDwB6To3TdPr9HOU2u6KA80Mv1lH6fVTrTyk9igAAAAMANc7g8Z2AQAADRLOxFDyAmCExoKeS2p+oqLafqQG3OKzFN5bNs2vlmGXJIiDGuBMZQmQZNkQEMAAQAADQANIAE0NbMHuAgwNRyPZeV/UikkXVV9zIxw3s0/snyfSPhb4Qn0+cNZq6o3a54dFDWY0Z3U7P/AHe0PHL8Gerka5m1wr/h2/o3S9LqNVLtu1Em5UNb1RxmPd92svHjbyZj6N8X9IjV8I6m6xueohOu2U5Pf6sP/LPm3ec+brvZkVtOuDhhuPEX/ox9qstr7uFn/BPecoxXLa/yZ4Takl5TzE6z6cuplbeODHqrM+lLubfH4NDsTRnTT7p43bMtOdZOyM5JNKXL92Qjd8xx70nJPZonrYNWq2L2awR0NHzLHOX0xef38G5dcrPbbpNPhd8tpP8Asi+yxOarjxFZf+iKrtUqY4jhy/sirTvEG28tvLb8mmRbPOpS9kv8me+Sbw+GEp51P/VnBmss79iK6VcsbfYWed+IlELMxhL7DcufuwKrUlLu84Ixlle7I3WZeERj6a5SfnZAOxOeGRW3HCHhuC/JFvCwv3CIsBiAQAACAAAiTg8MghgXymmikEwCmNPAgAsja0XQ1co+WZRkvMo7/TviOeji45eA1PXpaq9Sb2RwBnL/AA8fy+Wex7Pp3xDCrHc1sdfU/E1dmlaUlnB82UmuGyaunjHczz9/g8dXR2dRrvmXSkn5JVa
1rycRWMnG9o7/AOMzFet0HUX8xJyPT13Rsqjuj5nVrXCSeTs6X4gcIpOXB4fP+Jbd5Zse5tS/TP8AB4Dqzzqpfk7j+JIS0zj3b4PL6vVK7UOXuzr+L4+ud1Z6jsdGrzNfk9pVONWnX4PD9K1ca2ss6et62o14jLwc/Px131jF+2nqGvjG3KfB5/q/UPnRxkx6jXyum3kx3WOR6fD4PjlrUmM1jyyslLkR6wgHgAGi6mOZorijTpYd1iCuxoq8RRdrFiplulqxBENdtWya6y+nnb/rZnZov+pmdlcqQhiCAaESSAlBeo6ulWIHLrWZI62nWKw1yx6t+tmKZr1L9bMc+SlQGgGiMp1vDHbPKILZEZPcoiNCGQNMkRQ0BIceREoFE0tiEuS1LYrktyCvLT2LK7pRezK2gSGDtaLqs61htjv1qtm3k5dSIWSalsY/zm6mR0HNSI4MML2jTC1NEsxmzG+mH+7yZzbV6n+To1TxpmYJrLZOb7pGaS3JVL1DkiVK9R0blaIxzJG+ccaf9jLVHNiN+oj26f8AYzb7XXBu+plSRbb9bII2EluSn9IktyVu0SjM+QH5G445IiJfRrLtLn5Vjipc4KUDAc5ysm5SeZPdsiAwEAAVAGAAAAAABjiEiKiAAA8llTxIqJReANkp5gZpcj79iDeQBjEAAyK3ZJkQLbavlpPOclQ3JtbsQAAAADyIAGSSyKKyej+GelxjbX1XqM9PptBT3Srs1L9N1iXpUYr1TSlhvCxthvcW4IS0mj+Hqa59Qojq+pWRU46SxtV6eLWYu3GHKTWGobJLnnB0eldJ+JOvyV9U/wBHppcWOKqrS9owit1+F+5dout/DPSdRZq7atZ1rqVk3ZLUX1KMe57tqLe2/umy2/8A2p6rD/S9Opg857rbHJ/2SMXf5GpJ/Xs+k/DtfS6lLUW/rNTHD+dbVGKi/sks/jLOpX1TT6P00pSn7pbL7o+Z6Hq/xB8TauuiesnVCyWPl6eKhn98Px99j33SfhamEq5XXWWwSy8zeLMeX9svb35OPfr7d/Hl+oh1GX/rEL9HqL66o31ygu6W7bWE8fnB8tlCVc5QsWJxbjJezTw/7pn2TW26fRVOFFcK3jHpikfL/iGhQ6tZfBejUfxFtxLiS/rv+44dOnNrl2WRlz2tPH4ZVq6FCbSeY8xa9vDNEoKpZubi3xBLMn+3hfkpnb6H3ReM7RUstHbnXDqxRXOTl2yWX/n8D74wTW7fOMbkZ2V2fQ0mvDWGg+cprtuinjz5NWb9sbn0o+TO5NS9MM5WeUgdihD5dCxFeS6VUZx9MnJezllFFlVuMQrb+yWEXMZtUN9zcc5eMl1Us1lVekuU1KTjH3TeTT+n9pY/CCMSmla1N4Wcp+z9x30Y7pR4e5pegg225S3+xZXp3XHtjZt7OORhrDVL+Hj2YSm5Pthubno65J93an7wyv7GeemlUvRLMfxuMNUx0/mRKUU1vwvYTtl7r9wg8p+3AFU228cLwQLGsScX+xBrD3AQhiABMYmAgAAiKGJEgoQxAFMZEkEAAAAMS4GFAAADAAAYZEAElZJLlh3sSi2Di0BdXqZQ4Y56qU+WZwyTILlYDnkpyGQJNiFkMlRLDYI0RnBVY8lGcsCUUdHp1XdajHBI7XRqlO5GOusjOu1VRitbGDqUcQZ6OOmxUji9Xq7YM4+Py7W+e3kb/qZVCPdNLOC7UL1soXJ6UTurVeMPJSTk2+SIQIaESQFlKzM61KxUcyheo6taxSGua5uo+tmOfJs1H1sxy5KVEYIbRGSzsRY2JlUAABAiQkAU8lkCsnEC5PYjJEe7Ad+SIjLkEDYIC2ryVXfUW1PZlNv1FaVk4TaIDREbq9TiHbknH1I56k0aKrsLDM3lmxOcdyVUdxdykWVck1NbdJX3XI36+tw0+/sU9Kr79Sjrdbo7NHn7Hm77zySM2+3i7fqZWkW2r1MjFHrdSivUiV6xEcF60GpWEUZHyDbfIMABAxrgTAQwABAlkCUGk9whNYEWWNMrKpiAMhAGQAKAAAAAAB5DIYAgMkskQQQ2IbEAAAAAABQIYkWVQ+ZZGOcJvd+33IOv0TpFeoqt6j1B/L6bpcuz1YlfLbFcPu21l/yp59s4+pdUv6lq5ai9rua7YxisRrgvphBeIpbJGzV3yu+G1bFdlT1S09UP+SEId2Py3PL92jikipLc0aOj5+oUX9K9Un7LyUQi28HovhnR/M6hoYSimtTq66mveMWnL/KFvojuVwek1ei+H9Cmuoa5xWqsjzpqXu4L2k47yfhNL3Pp+uujotLZasRSShBLhJf+f2PCfBvTbaeq2da1+fn6y2Vvq5jW23/WT/sl7nsfiNO3oynBrEMTe6XOTzd2W49PEsec6prPl6OVtlkYvw5vCy+Ms851O3Swp0kNHepWwTlOyMfMks7vzlI6Guvo6j/BdVlnY00optZZzupaGVNTsvujpoY+mx+p/iPJrmL1a4Fm9kpcb7tvd/dme6yMYvfH3bIW6hSjNR+Y5J4UnhRLF8O62zptXUr42R0lsnGFmMptf4/PnDO2uOOdbfBvlyZB3zXGV+WdH/06iKwu7Pvncql0yvxOf74ZcrPphV9mcqe/2LV1C6P1dsv2wF2gthvCfcvbGGZXGXnLx/YnuNer9Nb6nL/laf5E9bbJem39u1ZRkUU/JJRUXn/I0+DVHV3J4cnL8o01arUSxmhNfnBn006s47VGX3Nq+6lH7pmoxZiU5QUU5Zh+fBCSeMxaaJvuUdn3L+5jsTqbnTxzKHj9jTIsqU842kVzj2U4XPkmrVYsraSCTU4p+5FZc98d+UElmCfkbj2t/Yb+h/cgq8iGACYhsQQAAAQRISGFA/AsDQCGAwAQxANDEhhQAAADFgAGOKyyJKLwwNMK00RtikEbcIrsnkIrfICYBQAYAIAGIAyNMWBgTjY0dXpXUVp7E2zjkk2uCXmX0Y+hafr9M60pNGDquururfa0ePjdOPEmSersaw5NnGeCS7EvKy95mypLcg7M8hGR3hiUiOBuWRZAEiSW44LKHFeoI0UQ3R1IR/hGLTwy0dONeKjnevafJxdSsSZilyb9WsTZglydI1pLkk1sKPJNrYorYiTIgIBgADBAAE48EEWR4ClIryWSKxQZJdxDyMiLoSxEqm8s2aeOnell3td/53MT5KpAAwAE8AARONjRopuw9zINPBLNHqOgWxer3Z3evJfov2PDaHVy09yknjB3tV1danS9jfg8Xm8PX+k6jn1y4Fy9bCuOQtacmTqWUev+N6UI/wAQjq1g0VRzaVa1YYl9msDESaFg2p+CPknjYgwBjF5L4QTQFDAnZHD2IAAhgAgGACBDAKAwGQCDAAAUwFkMhDEAAPOQEMAEMAAAAAJQeG/uiIAd3p9EupfCnUNLT6r9FdHWqC5lV29ljX/T6H+MvwcJGrp+v1PS9dVrNHa6r6nmM148NY8prKaezydW+7oHUk77K9V03VPedWmqjZRJ+8U2nHPtul4M/TX241WE9z6H8G9ChV0irrXV4SjTVe7NDXnEtRJpJ/
8AwTSefsec+Gum9P6h1+qq2q6WhrzbfZZJJ/Lju8RXlvCWW+T2vUup2dR1XzbcQrgu2quO0aoLiKXskTq+sjfHO32ut6lbda5Pedj4X9kv6EL9TZ8mcdTZ3xgu1TclGEcecvPjhJN/YtsnoejUw1GsfztZKOa9LF7Rzw5vxn25PK6vW6nqWt+ZNOUpPthCMdlnhRicPi73rfoarrmq00XXo510x4coxfdJfl7/ANkci6id10Zal2WW3NKFKfdZY3x+MmTrErtPrv08pr50VmaW6r+3s37nd+DKoVdd0Oq1LcpSui8y3ainnP74f7L7m59aluXGvU9I0/w3fp49SpjqNal82emjJKujzFN790s4b8JfnJifW9eumS6ctTJaNtv5WF2rLy8bZxnwQ6x1GXUuq6nVS5tscl9lnb+2DD4OnPM/rl11foMTBsGbc0JRyY9VpfmJyjtPw/c2t4ITW2QjiY7s59MlyRzKPP8AU1a6rsmrY8PkzqSmv9DnZjtzd/8AqLflGvS6xxahN5T23MvbFeAzFeCy4Wb9u1Gaccx4KrIp+qPPkwV6mVeJrLXEl7/c1/NUoqcXlM3rlZjLbDsntsnwOE32tP3yTt9S/BRnC28kROa3yQTy3jjBdXiUd9xWR9PpQGYBtYEEAmMTCkAAERTHkhgNyKmBDI8gTAh3DTCpgRyPJQcDW4uQSwBIBAAwAAgGRyPIU8sBZDIQwEMADAAAYGAAIBiaAENCGFAmMQAAAEGQyABE4zwThP1FIJ4A6+lsj3I7kO2en29jyNdzg1g6VHUnGGMnDyeO33HPrlDXxxYznS5Nepv+Y2zJJnbn6bggtyclsRr5LJ8FFLESZEqkCGAAgY0gYCRZFbFaLoLYCEyssmVgIYAA0JjEwAAAAAAAAAAJJk1bJLkrGiKm5tltVuDONPAxHS0s07G2Va55mZ6rHF5Cyzve5nPes57VNEcE2RXJpTa2IMsfBBlVEsU2kQABylkiMAEAAwAAAASyxyjgQN55AAAAAAAAAAwAAMAAAAAAAAMAMMAIBhgguu07phU3KEvmR7sJ8fkrS3BI16GiMnZfbFSqoj3yT4k28Rj+75+yYV6b4dp/Q9PlKckrNRiTj5Uf5c/nOcfh+x0q9ZXFXXWLujR2pLw5vjP4Sbf4XuedhqXo9A77pOV1r73l7tvglptRJ9ErlKWZWX2zl93iCX9jFjpzcbJ6ud83Ox8vLO7DW6H4e+G7OpuULupWp10w5+Umnj/5Nb/ZbeTyyU3VGSXok+3PvjDf+Uc7qFzuurqcn2JuT/7/ANidc/KY1O/jdZqK7NXql3ybsul3Sm+fdv8AyzvUaj5V9Th6cNyS9kovC/wc7QpfNts4ahhfbO3+Mltc83zl4jB4/wAG8Y3+mnlok3sQi92ClnJpkZB/T+BZHzFgRnwmC9URRfdFxfKIweJYYFNsPmVTh5xt/ocd5i8o7dnptXszl6qHy75x8N5ROl5Vxn3L7g9yveLLoRc+OFyYb+Xr2cIP9PNvjwPT29ksP6XyWyf8CS+xmXCNObS3u0USbyWQfd+xNVKSKKITcZbGnu9OSMalF5FOXpYRCxJ7orJ/yEAgEMRQYAAIIJiYkSSIpYETwCQEMElEmokkiiHaLtLcBgCEUTSGkMJpdodpICmo9gdhMAar+WJwLRjDWdxaFujRhCcEyYuqEyWSbrIusBZATi0LdATDJFMeQJZBiAAGIAAAAAAACAAwAAAxFANPAIMAPuYsiGBODwycnkpTHlhDYmLIAAAAVJCfI0D5AUeTRBekoit0a4x9AGazkrXJZbyVgOSS4EgAAEyQgEAxAAAAAAAAwQCAkGRBkCSYmxZAAyCe4hgTb2IDyIBAMAEAAAAAYAQDABAMAFgMDABYGAYAAAAAAAAABgIYDAQ8APACwSSBImkQJROjTGL6TVUniV2rff8AiMFj/wD1IxqJ0dBpY26O9WTjXKPrq73hSeGml+2H+xK1GPWXz1+tVdEXKMfTBLz7v/zweg+Gulw1ek1tOtsUNPSvmRui/TGyXpUc+W9ml57fYz9F+F+o3a3EIOqMcOVrSaS5WPfKTf4R1eqaWPUuoz6RprZV9L0Cduruct5PiTb8yb9K/Hsc++v5Hbx8W+8cqnUuGhlprkmlY5wltlbKLT+zwvxg5Goj225W64+/udXqfUZdS1nzHVXVGMVCKrik2lw5f80n5ZjrVXfGMk5x7syXG3L38bHSbntz6+8TprdGi75bSs9X7eP8ldTxCb98I1dU1cb7XJQValxCPEV4S/Cwv2MkH/BT93k0ysj9LYovkE8V/kjB8/gokmSTK0yWSCty7LMkrdmpLyRt5yCfdVj2AhqJZjCXnJk6hHPZNfhl97xWn7Mrvat0r91uKRzn7M11x7NPFry8syvc21JSpSfsZjVJcfYpnXiXpWxKSlU/dEXbJrwNSRKHojvyW1vKMrbfLJU2NWY8CUsapFLrblvwXS4Iyz27GmFM2uEVknyIBAMQCAYAUJ4Jd5EDCn3DUiIAWxkTTM6eCamXRcmPJV3i7y6Lshkp+YHzBovyCZR8wnGeRqYuyBBSDuKJjK+4kpDRICLkLvGiYCTyMBYyJwTJABW6yLrLQGGqe1oW6LW8EXJEVBMeSLkvAKQEwyJSTJduQFkA7MBhoAAWcB3BEhAmBQwATYDAE8gAAAAAA3gSkgGSaWCGUPIDAAAlD6ka+5Ksxrks7ngIjY8shglJEQowAAUAhiIAAIuQDAE8kZSwBJvAu5FTk2LcmmLu9DTyUbkotoauLgK+9klLKLqYkAu5DABoQAMQxAAABQAAiBiGLIAAAAAGH7DxJ+AEBJVzfgnHTyfIFQZNC0vuya00EBkA3KmC/lH2pcRQw1jhW5yxhr7kp04eIyTfsat1wkUXXqtrMU3/AIAjHTTb3wkTekl4a/cnVqYTXKX5JPU1p47lkCtaPbeX9hrR/wDv/sV267tliGGvcp/W292U/wBibBpenrjJRlbhv3RJ6auPNyX5KVrO+L74pSXDwZ5KcpZ3l91uNXGv5S7trYY9y+FEGt5t/jYwwptl/I8GqmmyMt1hfkK2UUVxbaSb+7Oz8PfDOt+J+pLT6d9lcVmy1r01x/1b8LyV/Dvw7q+v61afT4hCO9tsl6a1/q/ZeT6zXo6PhzoEdHopS0tU3hSeP1Gqse23tl+fC8I5d956dOOLfbB1WrRfCfw9DS9PgoxrWI7Zlba/5n7tYT/ZLg+Ty11tL1GjcmlOcZ2e8mljD/DbOn8Z/Ectf1KGm01klptGuyDg8Zkt3JP2zw/OM+Ty8bHLUZm3JvKbfL85Zjx857r0ddZxkbXZ7by/wLKjDCe832/tyyCfhFdksamv2PS8a3Uzbm39ic/TVCK9iqz1Tx9yUpd18V4QRbJ4SQoibywTKBPceSDe487BRPeIVPkWcrBGt4yEV6v/AIUiineiRdqHmDIQXbQQc+W0ma9NP04Mtm1jJ0yxIzPtr7ja0nyVToT3WxKFqe3kln2NM7YySpmidVTUssvYZJi6U3hEYTzsyM5EM4YRKawyDRNvK
IPkqEJjEAAAAUAAGFAAAAAAAAAAAAAANPAgAmrA+YQAaJ95OMykE2i6NDlsVue5DuYsjRprkWZyZoSwWxmWUWAR7hOwuosArVhJSGhuKZF15JdwdwFTpE6i7KHsyYMrg0NTaNDSwZ5rDGKHaHzSDIkFq9YOqRCE+1muuakhPYzpSXI1I1OMWQdcWXE1VlNFc3vsX/KQfJTCqYyJ5LFSkEopIIrzkeGLvxwS+Z9gIOLI9m/Ja05LYI0yCqnFx8i72jT8nPI/06GCiDbJ4fsXxgo+CeF7FRmUZew+2ZpQxgyuE2Hy5GrYBgy/LkHy5GkC4MvZIXZL2NYYJgyqD9iE4NcI24QmovkYMUU/YThJvg2dsfGB4GGs0a9t0P5a9jRgMDDVHal4F6c8F/amJwQwUSUSucdtjV8pD+XEYMHqRNOXsbPlR9gVcfYmLrKu5klGTNSikPCLiM8a2+SapyTsmoxeDJHUSU/sPoa/06D5ESULO9EslRD5MQ+TEkAVF1REqokm0LuivJAfLiHy4+wfMj7h8xFD7IrwPCXgSmPIDIyb8Cm3jYpcbH5AjPUyrlh7h+tfsRlppSeckP0szPsaIa1PknLUxxs1kyrSTfsTWjfljaehLWSw0sZKpW/N+pbmhaKPltlkNNXDxl/cZRz2mmNQm+It/sdRVwX8qHsvA+Jrmx0tsuIMtWgsay3FP2NrsUSLt3xh7/YZBnj0+X800vwaKdPCjOG2/cmh5LglnbY6vw90LV9f6hGihYgt5zxtBf8AngwaLSWa7V16elZnN4Tfj3b+yPVa74ip6F03/wBI6DLEuNRql9U5eVF/6/sidW/Ub5n9r1Wr+Iek/A3T107pkYanWxXq8xjL3m1y/sv7HgOofE3UNbqbNZqNRKV2H2vjtymtl4xlnIc3J5byyFz/AIMvwZnjkavk31GCUstkVN9yl5zkG8tkc7EdNdBNNJrhlN79aa8CpszDHsKc05NPybcLMq+E1OzK4SyOt5nKX7FNPprb9y2r6fyVFqYJkGxp7FA2PJHIZAM4ZFy7Z/kbZVa+GiCN08YXuOW1RV9c02WWvFeCDFavJGDxJFklmJUStRbY8TUl5Lars7PkobzFEU8PYauNkpYIuzJGE+6OGCjuVjCe4uSeNySigFj0kHFlrFgqK+0XaW4FgCvtAn2gBjAAMKAAAAAAAAAAAAAAAAAAAAAAAAAAAGpNCACXeyLbYAA1JomrMFYAWOwPmlYF0WfNH80qAaLlbkeUygMsaLmlgpfI+5iACcJuLIAQaozbRLEmZ67O17mqE00agXayaiJywLvKifaJwTK3bgj84aLPlxH2xRWp5JJZ8gTWB5IYwKTwgLcoTnFeTHK1pkHZJ+SaY3K6JNTT8nN7n7klbJDTG52xQlamYHJsFNrgauNVmocXsL9VsZnJvkRNGpahy4B3yRlTwyfzGxovWpa5Jx1KfJl70QyNMdD5qa2ZBt+5jUmvI/mS9y6mNSbXkmrUuTF82Qu5+41XQ+ZFj717nPU5LyHzZe4+SY6Pcn5DJhjc0XQvT5LpjRkCtWJidsV5GiwZmlqEuCt6hsaY2OSQu9e5id0mRdjfkmmNspRaMlsEnlEO5+4m2+SWququcNmX/qlgxANGt6og75Pgz5La2sjRPumxqEmXQ7cDlNRRcFSqZYoY5KpajDIPUZQGjuSH86PuYZTbZHLGmOkrIsfcjnKcl5JK6SHyTHQyg7l7mD50mWwi5LOS6Y1dyDuSK4xwDhnlgT+ZFeSSaZWqkiSWOCiaYyCYOSSAlhPkZQ9TFPAp6pKOxNg0Bgw/rJZGtW2NMdSjVT08Jqp9sprtclyl5SK0yiueUmWJhVuSM94SX2FkWSowcMPI7V22P8kGzm7SpRl2sdku7DRBsipYYTqa1p4qS9y+t4iZk89pcnhG45pye5JPYrzlk87FBkGxZE2RDyVyeUPJVZLDClD6x3PbBGHOWKbyyCvwVPktKntIlagGIZGgnhl0J5KCdaeSxmrm8LI65ZITedkQc+wrLS2iLkkZvnNicpDUxodiRF2ozOTYsjVxe7wKAJoAACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACcLHEgAFzt2I/NKwGiTm2RyAASVjRYr8IpAaLXexO5srAugbywACAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB9z9xZb5AABLLLVTlFaeGWfO2KITh2kSUpuREgAAAAAAABPAABNWtClY2RAaAAHFZYAot8DcGi2OESe6LgzATnBkCALqrcIpGk/AG6E+5E8mOE3FA7nk1qY2OWCuV6RR3ymiDrk2NMXO/PkUpuUeSj5ckWQhJ8kVUudyeUy5Ux8j+XFFw1mcW3sNUyNHpiQleuETBdU8RwWqRlqsyXqRRapB3FaYNlRXqI53KMmmTysGWS7WZrfNGSINiI00xeIovjJNGaL9BKt7GnNoT3JZK4slkolkTYsibCBsrsJNkJsBRexGTDJFsihFM/qLSqb3JVCYyKY8kalMmpNLYrJd2EEqee1blMnlhKTkIrITwyxzTRWBAAAAAAAH//2Q==" />
These are 2-bit quantized versions of [Aurora Nights 70B v1.0](https://huggingface.co/sophosympatheia/Aurora-Nights-70B-v1.0).
The importance matrix was trained for 50K tokens (100 batches of 512 tokens) using `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later.
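As a minimal sketch, one of these quants can be loaded with `llama-cpp-python`, assuming a build based on a recent-enough llama.cpp (the file name below is a placeholder):
```python
# Minimal llama-cpp-python sketch; requires a build that includes IQ2 support.
from llama_cpp import Llama

llm = Llama(
    model_path="Aurora-Nights-70B-v1.0.IQ2_XS.gguf",  # placeholder file name
    n_ctx=4096,
    n_gpu_layers=-1,  # offload as many layers as possible to the GPU
)
print(llm("Once upon a time,", max_tokens=32)["choices"][0]["text"])
```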
|
Artefact2/WinterGoddess-1.4x-70B-L2-GGUF
|
Artefact2
| 2024-01-20T14:48:48Z
| 5
| 1
| null |
[
"gguf",
"en",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T11:20:26Z
|
---
license: cc-by-nc-4.0
language:
- en
---
<img src="data:image/webp;base64,UklGRsiSAABXRUJQVlA4WAoAAAAAAAAA/wIAfwEAVlA4IKqSAACQlAKdASoAA4ABPm0wk0ekIqahp5M84NANiU3IULqY+CG8qOCFSeCnQ9T09XDHPyT2l9K7kPvQ+8fgP337Uu8vtnzxn1//D6z/5//yfYP5+v+69DP7p+sP/5vXj/hfUS/qP/g6330TemD/s3/t9M/r++fX5h+lX5D/R/8LxJ86/yH+G/eH/Ee6jo3tQ/nP5w/r/4z1I/9Hjz9BtRf8w/rH++4gXqPMp8JedP+r6I/vP/A9gzh96Bfk7/8XmZ+yPYT/X/rvGsqwKgY4CzvI1c5dBg/d+SIkAJvsKqP9v7Ixv4/rL9i5OsmxbJjzkNDlle0k42WLwvxpkkkAkoB4ygBIfBUgUcPJ28QKgaEazF//uN90T8xQed1199haSaW5b5W2lbsWm4eSlHNcavchxvsGZ6VGKi8kKrt+KL3CXASxMuOYK2aebVCdoMEFBuQ6YRZWfTImwiBj/ihyiHhdUmge/ZJX03DL//6km8oWvrSldOJZPt40bnn9MzbvEy642VQzkAPqGhf/3l3FwmPIHwlgS7bV2mba9/aRHnhPJpIujLuyc/9jVTuJgzY8kmN2MnEUfZDRFHvxPOvhXSNEjxk7yvzvEwJZeJOf5TwycqSNpXEVPN7qLEyR8y/5J9naoLN72JH7keHOw8rXkNZY7KeD22We3YZbfU45eRqU1fQsn2EUMY99rpK2yYvwM9uBzEPHDKdPvN7l8VIvpe/EPsEgePn2O/RpSAi1Lt9z2ScqzSQlZIZCP3ldDcNvkkKzVqeRDWg1q+VvSJV1hE7VPnw9l/VkxaidZXc4jxJ4m21qG3xG/DCHq8xKeU6qen04yb8Uzfb2qyV6bvrdXQ4sSHiwNwjC6LRfC+ks8LFghZ1oGZJtsW8uKJemKCcXxADEToFPyUl+bf73CZ5jjs9AhaHF/gHllb0y0CozLYtMDYKcR7Io02wsUoLXp96bu2qNY6hsGjWpYP7I1dpFvYAOa9S3zxLZAD509Fov+fpPh7XpPl7b7kzQMfq0akLL2w0jDyHRbmINM1XLLfCVrNJpKAMP1sMVEQOg2kMkdWzDLsLQQyBxrKH9+oFl0/A/serraIoHQoIE5IAgC/6dPZ/bARTnn/T7v02Vb1z5/q65PdE+wBNon93VAWW94tz3Z0it5ejunH6CaEzD5a8yVNoatmVCEietc4NGmXhGK4AWTCfOYJJAKwzeKhPgC/5EVO3a5rH39opXULN5s5fjbYBLimkY52fJcRjCkUq6bEtO6aVUtK3gIefhhZ+rpaFwzglrOzrxyVlL3Icq8lk9C8EzMH+tsPMuRkoZpVeNzy+SkxCJDRrWuzrRR27iryp01sLSYtOlPMf/SLWKCzvH5pFXGuPzGWr3/ynCXPpAoT5IT9aPwiYEw2aiUHzg+44zXWw8pNCW1g1k3cU/kv8fL5AJQN+cW7pYlIwbVoO/nQPvfUeeVGni788/ldml25kZSEyFGB6r+HqVzxwE+09nzzHPOWdoLlrS4TXYEcJXzYnXDPgb1S+IIJ7XyskNgLVoco3ScLqqaWNXBV2AoqXbX6c2oWvVG8nSKd+AzsGXNvEdsgWHf2q4g1Ml+WJ7uNhI4fkRgBSYm/ZIUcWbv482IYM0rU9IAS6A2dn+iyVIc5ph+jlNHFY9NLLOvwvpOcDymbe0+8yrHWw8T6SzAaC61HS3lLixFhTx0MwEAeyAMQZlFli/PEZFxp7SrlsvWbY/xpayEFZc2rjgYKLJ9RFqpFcXYshdJ/4nT5oluGwk5eHXCzptQREwNc+7YlaHReVvSI9w5hG2d+GVuvrnPvJK5BnQwDnkQkJT2dON077SpmYJHZ9YonFqIXN+nNCfHCJcHZAlChzc0wkM6an0OD3+34vbZktDUuEjaObvYvhv+2nvuoKDCQOdcaktU9/013zlTtExZcnd506sD8uTJSM8kld0BzWNTEHBqeWaWYBkoP03+eGRjUGtquzM0yUbdg2WgmvG6my0Jnfq7AmQ4s5qDXmkC+tOVKb6zHEOET/gGq72MIJN7F8/99gQ9jh6kLPXmOIKgarujlBuWD/lBozXfDaB3AdL/I/RAg1/0KvDkDKvAtPOBSvQ68Osofab69BX0ZtZp0Arxh87L5R+nz5PinpE8No10ey1Yuw2j/2XRNxL/903frXgnu70GO7k/asxiVg4ZDTYe/VW7zTzLM2k1UnD/b7j8K3kvy5KyQ7T0Nna+PJxI9GebNHb5T6NJffXebGn/oQcrzuQ64dlc43t6rAXHhOj//rkWfQOIiGl36/yIZbdVelAhHeSFhJWmAv9hxMCdRyohy1vkn7M3ufTk3wBszMQ7JofnYvEfEypy3C2y+nT1dGv3tSDB2RDaRZ4l4vTMhIUjdFr/Ljo0+XCjOmbh6iaB43X9ZB2zSPI322cKT4fjSXPgUzt/XfV/Eh/KGAtLumrmHsoBZJdbCExHf4cSGMvBdiKk9OEIrSL9IU3TCtiyxhXBouqhg6lxMCV/JxqUg7OFZdG5aieEqu/r5AUG3P5UbjPdNzzv/6tA12whYwe+HMLzKHyKS3JvzD/YY3bcZwbObRZORbB7UqvW2x9Lch111m8DcJIKu7BbUM1k0r7AKbfMY35DI651sp+LEC3fJmrVmlswIw50VKG9xjjTUxKJRnh7oXzTnmIJkcwCdGJOyAW/x+7Pl8nqTrT9nPie6XTS3IX96aQLbfJGkLYOauqMXZNBhohU/RCbvD7Sj3kE7HZhW7eo4Tc6F7ojE9ORjYVn+4objhR5H4hgtRIjTe7vwa5Hxeah31I1H7yvjBggyXIGYr/4o0/+oIBOt0wyLsYMn2n/FC7BoVZ0G/Y2v6tqj9k5KEALi9pQ18pdMoFisnw1c9vIk+wtItfzUxmWH3tqx5LeMkx88yfgY6T9wDEeB1T13t5W3X7+KcfrHc0kW1+/UcAufqXf2KyaXsHEK4v5u5UTgJkaZ2R+366XZrBbKRmQShj4A3Rry1cpS9f23sISVoDrymwrQpcuaWcj588FOT7B0/0fCrBfKf+E3WdL5xrLF3vIWBI5jwl3fNx3U8D7MpnlyuYCk/NENlHIfdCa09BC4xYLlnNZbvK5HkgtB1EtDsTuqwTTXY1FqoZ3si5Gf/6NdMPWwuZEHuXMUjT2TxvRF5seazwfTQJ5wuom+Qc4YIrs0cLUJKGgS/K5CsHF9UY3XcGczxFP92i/4FnDXHJwvEIrr+9WShqgVPjYnnrUUNZwPKS31yq+FvD2IM+VnIEftnA80xLFEDPRHKKWiVEivNDE8CkUPdx2SoYdjwHEejw4W9/aVQ3JsToQfUex/na64QJnjliDAMUjtX4OOjFLa+0aqC76/KY8TeCfCjtALRvzlXjy2G5LgxzP110+8vyomRuAZzmlil9P/V/z3/wrHofNyvgAq6obaWnFwF/INBf8McbdOJkf/z9hBfeYmCAns6Vrc9InmoJIYZbLgU4QhSSL57ywx
I4Tn6uZWe+bSU/wuMHp9hkcxizi8Gyrv/2e9OlgdiFEUvNqsvm4eEWu2YrvPAOul3syWKVy9suDIGpkRJGGyDYmqOwldfEE0x/60ctx0avZP/aIY7i/nOi/D21IRrNerLVsMnp5GuZSItkLm/sl6GM0hZMBS35EhUHTJ8FfCD8qSpPayHziXvk4zgbBMrEJ8qnU7TdPkb/p9IjKdzvNfmPRwU9syVNXHePNSQq9SUzWqsf+a6Nn/aB9AWzrIufgghDJdkNmGmJ9P/gkSWZr5sCIQw8Z09Ujk6OpYS6URnIRSZqA6ogI/gcbrxifSzzEtzceQrgqStmbX775QT///1Ir+XgHc82K9msdd5Hk9U8wpDsfbJue5gKh7EcdAEvzN5/awWczXYm32RvNt58EUO1XLZyRnE2WSGmuHFEJApkGRJJJNB8fNddyDT1rq+oNj+kFt/1vuiY3qpRqbd8EJPLWc994p7ZhGGaKC9uN5aI0iuiFyFHnp1Vnf9ii6fcsJoSrl/4HIvY4xK3H5hxrQHbhy6ApW0EqiXSm9VIIj7460J564qIo17T+ZpdM56icFUzwR19W9armFkOmYgneLgzOyFo8rF6YeAEjCF/Dqu6xA+cyh4sYgbPlFdL3VXQDSGKpxp/DKPnLYuozZtdrUfNHWoi0SVcUMTNlwmud3O8Wr6KhANHBLCpyXRUyNEzZKFjNz+YnPgKCKigWpKW4TZz3Pnzd2g7DIhBVwsfyvO8mu/P+S53Is0unuhcdJGLQdtAsqwioXuPgUcJEt6r79HUb4at/pHDQ1bq59xnwOjDHwx6FYooG9o0nE7L05a5ob+Bivvjn9S03xLthHbqAiL3cr/oaVMkpOFP//1yDxVGUbpbFBudWd2xdYpf3zyul1HyJqjT0jz33wp5OIVBBa46bkpGMyMqr6kIiNgtyyFwT2cAH+/zCFbqoQig3DZXWQkvstIaim85GR6WsIowNUVkQtZTzT2FTZLTnm6GFqgr4koj8uiYunmEM6RPBV6kmwi2l1yIgDKsvx3DbqxCigPHThofO4UYtAmW8goC/YNRil5INMrISfssMUvsYFB0BFCCgTZJtDqrMnSJ50NEQMDv02+caR4LhzsLTj8iDeTroKTIL+PPosnRZn53JAUt3jvaRlZ2GAdKuxgR0IB08Gwg0GqySrCGmoyiMNNeSXnrQAGJ3DQO7MFJsea7mZe72uSjFxtX+pz+qilGOGn3U9ckff7KZ4b2VyakYvL1wlL0LNWdxtKqJ5yRXasRrJRdy2EgzlzfteGOTTlUCAI3dGmkC6ooUwlHzRIfjC2qFyoXRo4VXGvuu8/C60q4EzkCQ+olRi72kcIPm+rHtvuYoC5O8x1CGkInplKKW2ATkCeRqMjCjNpKDwIq7kNdAhilcdiOju9o40lggN710f4V3ysKgsdt0EzKyIeZ08S05Fz2Q5YdlUydtTRdVm5EdgdJVLsvUppQvkcjHY1lqPLeRolyqFq7NSmYMhg7dxGase8U4U6SFClkw0Fqt8GLQGAUNX41RSweYjtqUIc9eGmFLX9KxfZ2tZ+axzwQnGynvresdDm7hcbf+OVLdOaNLOzm/BD3DNWmeLrQgOnoNl4297FxEQSmwfqcb5ldsC/1w21v06095FkkW2LIiPuKIVff6YORMm+ua7bxp4CR4DF5eBGiPPHYHVOLM1mG5WGO+Z9Xwj0fxYpaLh/Fcvau7ZCInMaATJ8aXOW9HC6xzHzm3nxpe87KS5dea6zSf9+74t16zmLty3ZzXFXtKCAf4haCld3ekHwlityCwp/JIupLKOVXgb3VgOz3Icgm8Ty5s7lGeMiTOkqaztrRqmjpU677a8XGl7oGXsxwE/D+89s1QgNi8qMxKgzzx5TOMqMxZpGZgeo2P2j6a5KU0vtZcj3LDJ6cctJM8b84kwJTxtFRjy5fAOhYDFw4cakKVdGbOl5cBrGOAKU28F8OqWMYHXUix26eaJXaRNtkPcfYcOm5MbexX4qMh6aA8neq62Hvu0+e3W1t4I3uWjlXsZMyXlI8B7wv0avJlt98D7riIfd6NP5b6aO3UXKth0IYwq+Wj++HubaRP3/UKnbNLkCm5u025XNopejJZahOvs/BGZ4h/CmgzR86O8WbCvyUDTIHAamw0Cc0sd8QT/CWVEC+U5QbH1w0QhWWJfK/epxrvzES/ZuWMXysGPbhRDsOoaETBlK6G2In/Ry2NMG1IkxjHausEd/MYoYThxXGwUnXDftB06mJHi6VNEoptuRF5JQNIDhplSINlmcSz58joJRKLfyXSG1GxQvF3x1akWnHZUIQ8uUfMo4kMJYZz9t/m2lviPCJD98l4FkD7i+4RUN81Lg0n9+bnk672jFyFyEfRblQGZTNBQx+U/qMtVkkHgfjgRdWx6wO86jIFWttsKTKZx/9a3YrYP13uP/PRVI7cyxIj7zQ+yuT6oV9Nkqe9eTheH6UAyaB5XnBVLXKYaiIBA59ZVXe2SniLYSE+PMupeYDObHLO9p8/YPR+Fq4u2Cy1yOMEwaveaJdUZtTtmMkeU7f52jcKmhNTsDcbcTjR9vo145/Tg8JOokaonOdzBiFi1DXU+3c32H9OC6Ad7M+5TLH8wbEsi22EbdLcdYnmPxunL1xpsMz2VVlQoCOq2lPMZ9lJ1ghhoXUjdBTWH0bU1KhcFQKI46ZZtOfnq6JY7dSIdQeo+B+fW+yrC/+aKUsy1FSpB1UoHQE7P40Z2N2gYvVsudn3GqQD3hW3PcZcDSxFUaduRXPkXr8iUEF4HEBTVaPeK2t69/2qXHKp/Mo7avbn7NGG1bua0dX6R+/ZVL+L5JIwqRSHtxuMGarCa/HTwq4e682QwraXqBgR+MKtOIg17Xx8ECJcRFqclg4j3cFgHVe8VONTyODHnq1a/+rkB9U8HbLqfcL4KZ6KRU+U9aH+bsVsT5vgScLPbuRT8S25/dnM941qQt6obr1yE3mkPFxsJLD1TV2V7bt6JDPrWPbigaL0jD5n/4SkRXxOogk+Pz6mz/wxWHwu4Tp+iQTfuCU8IREoXq8BGYBWI4FubAM5HJpeqkcrRqG2nrGSscHsunxjpke8x7//wZsjMD9wZQjc0P7eKJMtWX6ihb3N6xWfHzUKUsupcKPD5CH4hr1bKy13QuKYUQO+jLMzF2lnjUkgnypHUKGd0x8x8GLAfKCOlm/8NxDXwwhFBgEjZxJD1t9ctW8huGTDtLAIUH47SuPxuSENI4wmfQ/1RPz9oPVW4Bm/mQyEc+juubxcPfZY6TorK2V2YDYHgDtwBfTGpXAOCUTAuzIKcs9j9wbaAn6PhTdBeR9WD0uU4o21wbBYVcFjYaNqbbPsdEmHAy1IHtKDGbUpv9v78DQI3gA/npxfj5tQELz5tZ9pGBWyu+7lB2ordLM73zU9UZ4RBXQBnFtBWIThWx9aMY5ZEkYGuYsvQuumrxS2yRTlXMtT3USUsTjbcYzOGJaBUSHNSXXl+l8g127a1hrxSfn9SVBxh49QAwzsQHGA/28hPYr7QreiHCzQr2g5nH2uiIet
87cxZrzjf+5rhy7VFeWwxdjjvATSJJ4AP745Q4TxUlPcgnGnuXPJGZ3bZ7Ap1gR6Pa+mNY+IxYz3svUo1eFlwslCiRTNDR76zHtVD7qzEGXLanng4fB6eeZmidIcIT4ofQZpW/ioXrUivC/cPn+aco9733/vtFOCUS5ZxyXGEBZfCiA308ZgTJTUjFF4s8nWZ1sY/NJ1npFYrn0IzVUmij7CASkSuSlm5mhmf2MPwIrq+P3SmLRpIOVSPSWk+7gLit9GfhwQ37hiSFTU1/LX23xCH9VwGHOFxBENJ7yp+e7bz7oPP52QgiGv+sJTUWA5l1k68CS48naiBZcHbvNd6r1kr5MSvY8zyG0uQzTbRXfkk6DmPx+9w7LqBUseW8n2Kug+sb70eDsQ40ZaimgyEL5YMAWTgXO0N3vUBs34bicky5LmxiiTsdzI4O2eaV3EXttw+J5hA0jGePM2JWG4dLMjuKxWua8U3RFZRz5/MQqG4m48QlK5n6SNDTSAnrbIE6yg5Ap8rLVX/6376SYg/uwnf3w2DlSF3SVAZjs7SReC3nCpSLJx8M75m62UKlRGg8ZTBa+RgvxO3gTIejwnUJ3VKQZaCVPcLwhNt0SiXAw+pxOeum5gQ69BtKE4NF0fAqLMlNf67e+JuQSBBUYKe8dwBN/qALV64kjKjC64mYiDYeJimBGCXNbvn7OAABYNWD2Xlh2kY8VHOkhjWQlruTanQyRRdeG7tLjTa/dUr6Ptbvvg2oiHYyKXRszeUXvi2kIbQpW2N7CpLcUS8Mo6nYI9a9GGh5lr2hH9wBwvjP8IBGqZnOJOPfaxGlLT+TaYk3enIP/RwzkwmM6eXGY9sDLA0V4vqCPQWemLoafrD+nJfi+yTn6FGu1gk+lr3aJYV52J7bYW3p4HSVaBT6QtWy2MW0gMH7qkuJJWIKeiEJRXkHWravUb+OqTHuOELe/ey+JQ5eqZ3zjXT04tfl9MxeUaJvVvMW/hq7j7zeLhFEb2nxKd5oVXH/9HRttB0pJH8jF4BZatijbXaD/IervOTBPIBFtfR8lBUAo55KuEmZYg4O5ma0ar0K7klwJFh43x95Z6Fhm9xtpMAWl4FRglTtThpQqEd8AP07joj09sbFWUEjubDXJwXTUq+kUNVU3mc6Tym2xShLXJOCfoQJrvP/Kpf8OoT77HSGsir3tfobQ+3AX+RAGs4XfWnHjvK5WC1jiWzbZPtDT+tnwPeV5ipf/hQR9b2s6svyyPbKI32QZMwl9h3bd/mu4fYcCjHMh0ELtub0ocghwHyZdIPkQKpukt7MyVFtKPmxOs3RvLYII3McBThxitq+VP272r2snPcodkYY5iVX+cCPt88hwfEoWwy5SpHAUyAyPTkqKken/8dgah3KjUFWibWoyI46UG/QzVy//UhLAiN55TlmuHie53SfRl7x2C/ivHnxr+sQinUWu5WTwLmBP1nxHANo9wm7tswUNCfGC5XoUuw90ZXq57m+gP4gR+HM1G137xC64JU/YEJwpKzY6YYmN/cvxJCxPiD/cywyn2VgRi7uI9xkp0Q9WEZ/DPZE7qGpEw1HB6PvZq2IWyOehOl0WJQIlTSxgRxCxLU9SrR6OmAo1ol2ZSF1YzlF2wVrefolaY2aAznPeVogBQjdyF+U8jhEUnc5ryNLBuxfIuGsUj1yIypLV4rJIkvu0DBEcobQBBb5ZBcFF3IZPZKA7YOQJVdWRHshFZUvijCccwp7vHvN2lrJsTJrFm94PbKOBtDPLQPpPbgc3jXirxnpDN1vTucne0/IrWLAm3/wgKJZlAbBykS+/aAgtQZ5JiI98U277KGQHKS+NZCjaalMMx6TWvOgR8JfCNeyMn/QGeSCXZov2k9p/xpjzboZzcyp0oAbNdkzKH189/BqNHSfaM4xvk/S/eKLiwtNIiWrHc9SwZtJ39sDnxgSq11CZQPi0KMswixntFRYQvdJAyX3GlWi3NvMAojTsIokPx5aqjlbGBNjp9vdNjMoKfywpZtHYwzs+IjvY/+6x7YrlzEG/jDyNNxrdRyMLw4ErYaX0QGlMP0Igjo/JBQGMdMyL9w5CMrp8xXNjVybGZpIsuuFcIsP7UpBG+byQjHYeZINHYfFPBOfdhNWXoBTU97AVo5gv0qdUZV4Ee9TlObj6NdlKIc9ByHypne3g/cuexQPSEFBSshcgWJ0JTNIr3fsM+gSkaFyTMlJbpocsxUSstboGcEApp30GCYc+snir74h4sL5N7jklL7tjqAq489DCfNNZSLloSthDVKNDY1BRLBhBycDZpK6fq8oEheZAUS+eIVfHppLX/sFunFueWK2ADNCdCzWTo3KKA1SfBUTlo34qzTDGQXvPbNXmWiPOmyWhCmLGo+aGib1NlGXIqzAH0UAyCvfBh3xehCZGD+Xxb4fjFPKFSHnEkMMqLO08FBfSeZhsjzvQpb8m+A9NGY1XkCJYPGeVInEwDvFIDFRVCi18MbgqNerm4Y3DyL26VDROrEO0uT+7l5hHj5W99Y3LH0TMdIRKEveNuvP5EOSdspmp65J61JaS/Na6Z59c2Cf+aOXulSTImMjZs86oubwcyf5rRnVyEzBqhgAaQLAbgL8e4T8b2/Z69/FkZzpffpRgcLkY6UTvwvwgSYTmBkTymuje3hWNPDQQTuMoyrs+w7bwXMrEOYb3ri7z/0fQzzTnSMHA0HTFCbTcTgW/Xb/Rq+H2RMMzivpNcRpQngmMoQzWJ9lzBPM1rBlVbQTVgcAfLrdB0lZISXdRiD33kdPoBW9DHZr26lw1z/NsnnxKL5WN+COji+E0RMfQS1HYEL5q3X4DaJYnrDZQSw3meuxxTjzGTcAIRz1DM2z3Jppodin9TZ2gQaj9eHIINv0vEAkEeoJ0PWDmCK3uZ4avLnlf+T7vKPO/cdS3sXdeeW0lOmMzYIvwhmQFq8uyq2z6yF4D88ERx0bSxxYF5vSkkqR4id6R15lcdN2tAAasAvLdxz00pnu0fMiiCsQ4hrCVKMMT/BPTw9T62ul5tfaHMQJdujNX+6s5oici5S1Ss/SdsUgr28jPF+DE0SMWJ6vCzw9yRPS60w+AikrNaekI6i/rJ6jkfYCTHSNokAq2rHV81Z7ISIj7S5VYorRaNUI2Or3oPwJ3Pgh0oODQ2IOE01pGXWOInptnJIfch6IrRFJhDmJKWipFomt5xVBEY6Z4NBjhgXBXUUgwaKqfsar7yWbpHg0PboeFN7VFRv7xMJ1AEdbWfgg1R8A97SGtjuMz3TIoocDJBDkarKntKTKTyuWJ4CcO30d5hcC0lqe0LMSb7SrqlKJgTIkHwnWtFhQjdPJEo283InAeJw7C5DtplfanxX42SErrT5eIZxpEd0fu3oT3M0J4TyFvE64Pka8rsbV3rZt5vGd+nkDrDpR197b6qMQjlnQ5Ni3AwvY8I2j9/m2JPFtbEZE6XD5v/r3mKMi2y1indLiPxt4zuX5Q8cDqMhYFVmdeu1XbcwARHpwfSCw3N1LdMcJxwYpwnjmL+3n7t+N7HldY9DUf1sFHWXar14Z6
Rv+NyafRN12TdaGXLpsQWqAfKeHcXV26XRQZ94rMzILrYAol7jKO5KKt8QbyQteeo/VZSGhvNUEjM9b9rFXAeHvffIh71nrQmwMfANefAdUyd1+FKUbb1+BIn07+OPG5+dHOPqMrxe0RB1dgfGeB8/mCgWk662+3wyz77rdWFcy7IY1mlkiLnSUoHJ1PWI3Nxoos26lAjCMVXpUEK3Wja2f/3dD1rYmV/LiGN7Txiydc421SZpUWlBF69ziDNn35Xjk9mxSuJS45zG8VXCDffNoB1Nu1vrwBIZplTlDpo/FILjvm33cAgUD14PudZ4mKDsB/pNbNW1DFMmC8Yx17o6eJmIbYOp8vr0gc3jcMR9aRmFAUBsUSavSTg/vZ4pOEheVKuxYoF2UfasQftEtaWOEcEvlJ3+r2KdDJq2KNbldawtnfyI8Ef28Wm2rBSztzWaVVHPxmQ0HU/gsOu1PK7xTYXVQ81NifmFE6YgGn0AyS9VOkpAx8DfMuq3ZqgUaGTJN5TBrEMGbVuKS0icZywYzrzEv//sEVLbkJepB6v9ZIHNM/LFdIkb0ewoc0fvGqfzlMntGceDLMWqJw0oQv7tKXQOau/wbYwMYHaGMHIF2AKC8xsNHyvY3r7aI4jKqJFc/vLzFxfApElRk2G3HRPoMoe3THYZp6nX5X4W7Z9qsP1S34fKb1N57KupRRn+6FqpGxeWY7dk4qA7lQJMG6Hd4qyhv4l6SuwJVGtJh+94bQAyyZ1Agb7I4GjSNxMSKcptT7bo+LAaFKSMIWUFmoDH6ATsQOoFDS/468ULq+oNMfyhqfoKp0UjBpkEvgoiZoReRCYFxW9eekbZIeSWOZ4Vt6O9LGpLmuHmoKRy97BBSUPaL5IApQ8DCsctf8S98+J5qL8WpwryZ4ENFejqxt9kcKhW7Ql1UyDXe+n8lwQd9ReRV5sEZjY+lz33vyO98sSt8L4rDnuxHV/p25B0hOgQBmywVVN5ZhoQRGwos9qUpg++jKeHzO8J2Zb+5AGZS7ji+61lSo3e8Cqnodx1OJCdgSzrgkiAyUfdLhB9VuEZ9UjAalZfK46IiWS6I5IqpRq34iiAWqwSaXizo1Q+0Fx3mFnglhrES/6Z8U2L+eiHwh5UoIgRzc5OiSFCQRTK4jAJu5NclmOqaaUw2uaeD2dYIa7AwkC0E07yryhFdtYQqym2FUlzk4RBW9rGpr7PJeebQ5woyYYSb26n0A6ezHq3uR1Tcd7BYi5f6NIJbmXD1I+MtkBlSy3oIy+hjmWNjH0OdT0BjT4FiFvwiCFd9nLpqhD74soH4bBj08FuNCCIIKqKUTC4OYFJ4G3m3zvNFHQFHOk2gt64gFY01oJahd+Wl2uwKSWgyci37MaNnnjyNIuhscuSxS0dQ05Y8cEqVSQLixGwyZ4l32+TLZhcnKN9H5mfcWyoxEwPePYU9/9nh885gNfL8jSUAqmwbP8rvxwur1/aFf205N/vRr7SVjXuyZzxq+WxzAgP1J0OZPtltyAc1HiMIhQsKnEILPe+b0XuxaXFmBtj1yuw+SURkjh/rAmwcYQJNFBNekxn6jzxbZ7F91WVT6Df24TTnUfnccGi7rKBcWMYOiY5TfqyquEdlTU/LfaQlnlaSD9yz6cTFBOsPR5Y1iHpGIQeO/Mz7/toK0PnFhW6ntqsnYgKD+VsO3zqJC2kA/YtO6Q+aaevivDvVZ1to34jxvxrsWekKL2++JSY2gmoWml9amjKKGq0AUI/SRWRKQVz9sTqNlpNOW10btCjGNE/fCcglptgBTMKqGOS3cRHdZs5FaGL0/CYERZuNmUuXM7A2qZyNBZAm7BpQgaJSIl9RATKn6M6XG4JeRLdGWGQDm/ktc8P6tpL6GF5LUXN6ekjQNXxXNv1HwkgxOwNSpWHPG4DvveZzpIdaXOeLEslVqLohY491QbovxwzgDIKmQgBZbjzyDg1RrUIUqWKK6dlYwGDen//WcKRoRxbTGVfCv6evmdwM6diKlF33O1vVaO+VMYb8OhmxBaYyg5mt2yjnmRruCdS9U5a+O0NqmB+SfBa09iFT6uzQUvIP+xvpYyAMIkfEIymziwi5TlESxiYJInZifEYB/sglLUbpCibYmetEjpRMFD+F5edvJBHECLFaqYuxkku+gPsdYdicFxEuI6DJYaZhzwrhqYKgcmk0YJSvy/yP5ZF3M/+1zBFKzapVnP5dnlhQ83QJ1w16rf+dD2RN50KaElIA7bhTBLPIP8EGQANgAwFtJOjiTCzrZi8qu9ZSx1s9VpHmRAvNaRMGCQZu7gkICU/Hjd58pWKtaSAS8FUspTL+z0gmCdeU/yhejciOeQvzxOf0RgDukuwERLdv5vlfi0OltA8fbWTiuc+nC9qjUl954wynEbkyCUgV0QOh0dGg+N5RJVp+EXQXNR+a2Y5goovFRliGi1hIp+xSRVDuY+V6sk+H1yMVyMNjrhSPAToFAYY7aDvkD9p++K4oQG1yLnUEUdbI4fqx4KrkiXbHEJaMWKmcak2T5RkndASCliMG9MZrrWkTgxLPrWFag2KQ63b5DkS7lq8p7Wrds9tKq/a1hTLhyw2Nms7a1FXX1T6Y2mDvvYDsIjNMGL67x0SSRg5OnxOoh8z//iJWIsNf+onOB2tZ7YmHz6GHMxJGvh9gwOecIMYeO/IdRLgCACKUFlToekiMsspB6jqv0ABQi0+SbSHA2dunqizV/xNNv1ufnHtuheds7bhgd9EK/F/W/mrW+5lcAVkBs8VaGsdhQscLcRP1C2sx+E1lRbUnps9ATwnqdbp8tkN6fExzfmr1TylSxU6orn9/pOWvPjLbfygeSrM82kjmiBhygQ2n1EHPuUxM0LMR75PtNcmhtATn7R0Gwfgbnn/H9eV3Amqlf7ckFI0iHoF3lut6MDbWWkJXGLZ3XrKxsjcz1AJCcaShbaZyHDB7zxwYPNo6XXAzuJ2ZCAVdCNIkwpA1iAl2wSluLleeotcloaRuaNzOzkJVc+ojoRhtvhFfJZd55NU0HBii4l5/Jx7iS1ZvLpv3qK59qWjDrUm/KdkxJjk++tJQYqetuIIGoZEYb9N5BrtKLG5fvEPgkap89y/TPaPwKcgpGL8cXtgWIFy/oRIQChw7YajNdUV2NJVjE+YEVDybwLjJsH+PETY64uFgzl4a8vPnLQ3fxqzsBk7iqIcpstjh9iTbmhohOJd0AZUAoOXoW9xblZEeb0RnqKD48r5bjPzOAVTev2E2dXuE8AImJJ/b26UKzwlexytqsr6Ig96+4T7irucWKwc8k5H/t8xWYStML/G4Eu5/oN/KDBgSQgh1TOUs9qAtHwBSei/1dV5MEV2HEvzJK9Y0C14TawevMVM6LX2iC8d1CfYSU5PvQMAYGMQ1F5s0V7fKc5SwoiOUU+2WsMVdl54i1TpCMVkyXtAgOX0tcx3bmWBCNjBzTX97v5I+FmvG9U5i8k+8gCkwJTwL96rWzRl4p/AZG/ONOLp7H3FXeFXBntec6hKCkPyfkV4VlLJpSj4h3PcK9y7fmEQQeYOZDACcRHE4
GwVESUpyxgQatBDsSgCi6DGNCnv/8N/sbzddgBEz2btk/WEXsfYboY7dX5haG2uKXXe7DJxRkTkJPyd+KvxFKUpRELGNP4KrPu4qSTsbx4P2BlGIRm2YTNwMqRjoRFRhwnL5wnIA+A78Htq7A7FgbC341v77UWipzBZ3oiVbJ942Qq4tQ5sszH+e6AKRe1XS8ruHpZD9qP+RzYlqyPtOecqE3g1uw76JBks3A+OibKwBLBdr3GDuFskAx3b5Nq73mTp013y60XwjZYC8nFV9wpNbPT2MsmgICEPxAbxUaVSEklf3un0XQ6VMNcPl57j0UE86vXOqDHL9AUBbHmA0rfNUsUzTb02A3/QkthPL7W7aJUAd+dbYFScUAhEIoJDWFmvXTIs314yRf2yKNNjZGYkwXq3ujheF5M4UuZdsoQVDay/nD/YGyYKxjTgmknNfsSOZJw6wwg5T7phk4JZht7i8ti3k/soqTWauA+mhruqjzbGW/i5irSQGRGpK2hXV9rYtRgpPqPKBRg/UBiKHyhN33cPV9mQcOT/syhrYuPjLrLesixhq3OEr2Z7QTxKzb2T8cNTXicqr1cpDMLu8iBx9qtPEuL0wok+2CnJKNIIYVJtccoFyRJVEMYdQ6YLfCfF3Nr/mpTvUjsOWrGb2FEvTjPIKsz74A6pZKYOcqTwq1iF3FGQZde5s3LKtbWbIZf1KLiAKQinEC9W/R3jA4fs2ph8f7v9ZXVVUL8V46xZs6e1VBwMWikXxLTGnFY62IBVagoU45uCyi4PgNYTfbfcZk40em2snhg/kEkFBBtWtd9WdvHGGK4HNEJTHxfOOP7Cwl4E2IJP8fReGErlpvpaP4OrViUoXO8JccrVvbXBE66cKyY+/H+/77KLg+7s/1Wf9mO8bFOX9AwLZ3ks/bgK/aT/9+93RzmTVXm1DC63XhoAj17DOmDCENU6j23C3qXSndYbmUo+CSmxynes6Qza/tfi/3MyyIptBTHugMMFhOmH9AUHWeH2QkR9Xgv3owmauI8RcX6y7Cx9QC5pKtxAnW71F5UBVCJuU4DjS4GvGsjV232Q0bFBgE9FSbVJUlunwna+8FO4P9TN0ZrXOngKaJGJIMaY3q/PDRkBllWjK2gein8ss9D8DjQVS9LtbUrlVMZ2loXeQnojpCCw5H0hhUVW2anESRKv7tM2H6hTXXoaGLigkJqLQ3SXhDf6rvqHHZBxlPSRPBr81c1gr9wPExiFCXROlXNCgfXgnBbadi4zlzP40xXIDFD5zdDa9Ys26OxY+neTRODiBsmgDhKggjYbZ9Vw1sbcDFMgAybKgiZxN5tUeXayxvfshSlQdJut2gKjfazud2FeUNkLkUEi6k5c/mAXzNiwRnlXFyuSIQBqrHpQP5efgSV12bNuxcHX4IeUCEgk8itZnPwwdfEqz7zwjiI2o5SX8vcNTw+agLc2Wm9bc9bmC+VqqHrC5PSvjpS4LR3Vv9WSf3PG+Lk5HV9XrwbLL71no94fh6VOXefPaPEQL0ceCEYucqXXboEciYmhD5ZJDEDda9s3XlI+Mya3WXZq7mGFijr50c+i7D1aw9k8BjPIwXDt3hHdu/H/zScBnOUgZDHFn27a8mLKr45OIvpeBsF0Rw+OUajyS8rI8Otd1my2aL8+3zREUL00q37AyZYPI3oSH5JV5JxffPts8gK0rtKUzTS7W8oRVeOkNp5F+HqIrzDG45sBU2OJnaorae0uxJG87DwHi82L03IGu+OVFMIaY9qzONoH5jgv8Q1Q1MJGGwvMRmX8s/6bZMwcOkpre3bMphzYenug0WFK4lD9HQPyH6jPm9LP/MP2ifj1Yg66GXZJE4T0NMrVnkojlMmAWzc0LPoIY2ZwBRZajI+pIoe4ApDNTnQQaM1g6PFwvXXe2BO7DXFOl1iCFrDFeyXPVwd0MiokaxeuweYuzpQ+0ZMrXTHZZd9HXm7wTNWng/g5zOkW7cUC8zywJ1GSvkvplm0V/KopuQ32IwtZaG6N5A0F09K8XjiRiEXaAlEIqzrOkKoMwMD4gCYkLvRsSVaiukcjNkTI6xBEqYOcazjG6jtGvXuuznn3Ppb3pUKIR7FU/r4o7E9bnHzoGrxWIG5KPJJpXb1K9MajpayqF0gYH6Jsd+mjFcyuYaR1Z5rYyoTE9jKmtA67x9vHa7QryVaIg795ZXjblh/TPFozyorTOf2/WZo+f5R9KzlFSgFhJSOQbh46kV0bhMnaB5AZ1/D8/FzkvxlOQOT1jizG7iaTMNS2+woBdzcWZC01SivPCsvzp9dmFWfcxs5zvC6Do+NXROwlPkjhVgRt7P6e0vkkHe2YKYZ3PrK2/rfyTnEpgRfdq3GI4Db21navDyoWKVfEZSArhP2sSRUenYICDyDaObbTFftRSzBiiPz/WsL+Y8zmvyqCa7k819id2pPRRRCFWLZLDAWf9vC7cN+6/PV61tc5X1JDPz/uIxBmkRzTI9aFSO6tknmpJWfWKarMVK6KQ0hBF51c8ugUYD6swPiAIsVg9AY3KStYHYGleTXoI+rOBVPofsKtpiOGlWGv/6qjZC4gphPEYO3V7X9Eo2YqeEQJZXI918eCcTRwCFXhJZ1MS7eeOyBhN2m0cqwhfSCKXwTU2SdiK3ghWYBHreIS6REMSV9F5Mrcu7GCAcaOWYZjKFm0OlIN2XuFXykEUKvO1TdqtOJwFS0xLwNUqrmuTMwGIwtmQrrf7kpzCQMqDJwj6lEUBvm6M6/lW3PrJqp3YxaEfx7UgoTBois4Gq/lb8p5KhtqtcbQRgXLHw1VoE1pochCFH9lz6+I+WFJSmz8bzZTF01iwqcVo+yoWgl8xDCsAwuDTp0kB7k6JnxTr++95QTuSxSAunR8dAZkuSGUPioNxR4vsLSedhUnPFK/d0mQxQhXD13RY0wq/9Y+fMITofunifESPinu5t36UtZlw1OKv+K9m6y7OemO5pOgCFY5bmdAvPBaLlaljf/gQlj9nV6PmBGVqBnfdnMcw2ktlQZuk7TphH9271YySmuCkSfA4h4YU8lTB0q1OFe6BV+WRYHqxFgwIJqd/01M/bMvAp6QgXImcPj4u0PRrk+eVC6ypmHeCe9ARu5wuRDOxa22e2kdGmllB9VcuoG/baU5OBTb0lO5Z9dzfRkez7va77ZALlOccmOiTVbrpukBLdhV2Ho4lpoVL8EgLlgCBFgUEWwivwusV03v5FePx2nr0DAlHHlpRt+AyrV2/q7Otz+xAs9IUukwSI6LAf1jlb3Vl803yGLRgMa6nwDTcuRUgdLa7tMoyVLTruGIYd5mtZBTD4xQ8pqmBAmQ/HdpxRr9JgXkGyLg6dMm9zAH1aE9g2qENRjMPx/eeYs95EnNhcKPkKQqQoML8W7Bt5B9l8+vkyCQJBz700/tmCPxOmf5w5LvoKyxHMGw3jjGd6cv/oev564mskNMGrJxyr63JEfNaM6E+yfG/5iN7Cxfr3PrZW8R6kBnd7R4O2h1wLDakvwaB9rTHz1U1uTA12Gh2AirEpQZicSTkAZxm5i7RcAlUpZP
AzggZHvwmPV28gxECfavZg/r63zLzRFwcAX1DCEDq/QLU5Hz6vuqCdKa64T0N8wX0MVhxGknoT1icdvQaDTH8mYhUmsRY3jzgtLeZWyBOKGT7zoR6OuXRfgKIXkA/h/ZeXgC5ZUD2UXLVYF5CDdCiikCPJ12aN3kWZnnktgGDtxFZW5t3tfBbdnkYH09k0JsxWDqb5hRrY2PH71rEjvsrv5cpPrQfH9GUqesBnXdft0SPQ7UNPEhhIm5Tf6+fdltwsXhs/98NjImG8smHJZCIGzybZUgEJb/vAsZRnIFpha03NP+SEdWx1+/W3ctDH4XWGpa80VTvIzJojnQC+m928sxMjlaPzGF9gOKnNa7QDREGPi5709kCyKTLuH/1YyrXk/cVfvtFt7dZxWlfZXAdIj7iLNyh7KyAbdMYBkPh8OJa+qrQifjzDtu/q1AUuFUf9/z6T4d72Gj8KZ1xv81aFSpxDGYexs2PHiV6ooJwjvFMOn0RLFhGui87Cq6ywYw1Q7p1FL5N533RxXmVqrxVsYvBESdNGGDEKBr6J4CqsjHiZxx+WQB2x/jKaPguX99pnOHO7eQVRMxVNMJ2ZqeILy8pke1s0gFzHdRkuzthY9FqXF0EbhrvmCFRJRp03IFAOiCYsAKG7Ymie/6jT9Pm3yjjpPnhYJ1z4DcYukA0fF3zQsHkTbwqPP4ZtNCxCO66wnI0X/gdvaTHkP1hg9nF5/cguKY5JHFc+MU9l0/3+vBrw7JoFOqCWSI4LIV4xXJTuGZbzzcxRqLq0qViyVaIQZIwOBy0U7ULnKWZZCaV3nTlWATwZPDUk2m0He9U/a3hUiE6GDkRvmoW5OKXX833O1PZKOd+HRYjJfFSOYtGnWy2OgIZZXXGM9SWOk3lcSSBrc+ZMVm5JEmS3ed6s1N0EIwhi9PC10Ykrwxyz4PyWYjLSYs/bpnPXW4d41ag/y8Qu8JTRZUxtACsrImSEaUX3E3wtowf+Hs/1dy+d0RZjZUDYlXx7wNaVnTlrGGmDhiQ4dDlvKZRXQdJaCCEUrfWkiIXHNmukYm2t3hQBdliw1ceelc032t7MbhC1lKYS0UdjeL41Ia1lP7FTZAQVnyo2HBemZXXJ4PXMsQwWNH/JlXNmAWE00QhWSSr8Yq7kyJUtZB3Fr0qdqF1M07jVEFS8iU0yAt93Y36b657HrvxS88dmSi3IdXcCP4/lGPfb00dvKYyXAlKrl4vTQgNb56aTEzNwc+xFL3MKLpaMSElayaONk6rSPF50+V7mfosJQqF3O4g/ioUUYUk6cc8IQYKWag0w1vEfve7MXwAI9amgIOJc1AKG57XvSm3u/+ID1dtY2yAWIAG481OTarA8xOKgw4bnfRYJEdxmHjDuTCyF326NK0EHwgLs6HNwa75U9CLHj7mvvmQIsqpmBUqoFzgfbFB5li+n+BwEoS/tt/475TqcwZVG++2B0wDkwd+tpbWp+/Hmy425eVcNejZgEA7YJYHDitJ3kgQTxQluOWn0GgFlitDNVbA4IDY/kb/G0nwcZ4cPbN8vF6hbsElt8DVqukyVbBaF7VxTNJ6oF78GeRxYExZ1FRbJDOuYr5iGFKfDnCHIFzeReyvusrXfL5zzQFliEb9obMR4J3NkqRVW4W7vIv6xB/28dJ6FGB77G0JfhNexuPE4E03fKWO0nkNddSDhEKkrveUE7Wp9AdssVcVxUwzO3AmcX1K9Ks/JwFGAqW+iuxeyeeC6YoaqlxpUSRrnzwXvafNKO49euP4RSmvQlU/RuWkngedEhbx3j3UdeWolcj8Jm76lgHDixylj248azvo2DHzo+S4vZjJL9tnmCkadoTYh7dGxb+ewJBlZHM5uWvXu03aMA9azK6kyd33Frl245QleQRRQZ5Tyta9+ZbrHlxle4XXiqmbd+qwd0ue7jGr4b58Gffph5VmuwnXxCZmxLBFZnCn2z5Ln22tyV9CD2m+tZHSx8koRTvXrFxZLml0oLJlt7z3DUKk1phYIu4qUh8YZNLHqcc6VoCXHJIw7xooJYMRTt0MZ9tOw5qbKSGAbPFj5c7UlfXWm2X86NrZgMnoS2gvCdoevky96s8oSOhq7y75cCznTkFmDPPsop6/JN3LfggWSNmqkQ3M1Ihv/xkaQ4o4FDZW4INV9vWJqLGOgOgSuX0+C0pjotldsa7K5I9bBf6LqC0UaBcSFmZS7aSH292eVBKgNUW6iddZ3szi4OlCC3WZr4emzyhT+LxMNyKHPR1qepO5aTb9Wyz/OhQdQKuapmDJfewbzPnON+ie0zRS+QZtJ9blKA80urfuwT/b3FBmM+GhOqrKbkkC3HIxG9K3EwUi/Med5L2HeXkfkFKbF9f/X6kURB32Lyavy+wU47nC+AZ2+8V6ONEmwXM5IFUTU5EXwQOODrnoovWH0nFY9+HZLLmCcoJLOxcmrUregHgs47D5ClhPcgOmc1jdIJVE5aeN4Tj8ThTrGwk3trok/Z8AUZoeZLBT1c+0NQExIgC0rWgRTLEyAZON4b67ce00TVt/7mRCG414IG+ruzXRj0EA5RjVyOiNEybLM12YpcUsYRdbL0XzeVLmwOmmrbZlrh+bhIl1+td9iJtkqzaHJhrDce1YNh29a0ARTETi4hIX9Fje+JHtizU0Cqu/vYZF8sWoP49eISIox6ngu5XrZeJF4Foi9a3pD9nsAACqv+ZuHhToN7UqVep0Clq0ScftqhwvuSDHSBOKnKeFGdc/k6jsfoqPci2245FfnVBRPNxIqX+asesn+9Ohwkpm+EZXEUcqBibNKwRyemJvzpcZlhWY1p8XnoQtzKzzZ4toy6DsQext3Xolm6ahrzbk84kONtE3KwPxLL/7M/eBzAm/bXEFHaepDEQeA8SeISlrdk+G5XSOdF1icumjylCxFSHJ9Ibv/P/QFm1NNB1oQeCtj4lGQ08pBeFDmU2wQOV3kRfDaK4/Bx4jVDU0moh2mf1yWYN83nkDBFFPh7qGX66w7frBXS9722Hda5ZCI4QrcTV0y88LL+V8tQU0HXI0RpURGVyHTRxIlDQirRW1ZOwSko68CW7bl9vLBpHtqbymlMdgrfIcQQMBLgfR3NaIqGJ0c3udHX7NQiPAvggJQq8BqTtINucloJpDeRVGYmZykGim9kpPkiKMIqzAZwPYJyUhfpTjLYQGhhfXfHVZ4wxElGswlc19KirvYVd8/L8xxDkq34n1b5yY1JMc4Yop0xLqXKtuzf8SsYM8K2TrK9iUtbZZbScMosbew8+QPCoGfLoBGqHvbX/Hu6qpgA2ljeNZ6HCISlIajEfHVDFWGStBTNb7AbIvuKswmcO5X6/fpTsWfKtzQ76RLuGQxJUVAg4uFx9jsAkblLE5pR/7MAxwt04iGL2DdBGp0m/5B/t4IkyzDOCZlP5xSei4bxHehl8H3X9jUymJtpVd7zRRilOAX6vhv51H2M2Srz86Y9OUdQ6K4sXlPUSHjCjkmzJXjbLF6ksgx3CDbW9NWH4/tvE7Q2r5Bdw9rvZpkpILyhHnJHzItwtQulFxjBPvH1sS23
Fa2RA3JStGHb7749Tpe15Tt1uVjN3ZLv1FJQmbhzhbT0CQ65O+NmAwimYgiA+IDo3to8Iqzyzzxk6KtGQfVTmZqcucT3qFd65yBOvgMKCChX1fRMBdBJGgPGL1+MbdXz4PGG3Ifqqo85sWrOL0IbUnOip9Sjb9KavUsGjgp91H8iVgocmYFiTiDFJnUm1WfXtHvBENjVDmVtqYxicKO54OE82QwkZiQtfvsvIrog1nnpWtaRuxB6l4sA3d7HTzUU8NufLyFu/Vs/lkFolOJKR+7ObdRKFrbbFUw9649IllPoGSqcC+ZheMnraoFiCgB8r7b2vWvpd3b3bekNLX4l/RIjX6DyDpkIsttaYgiBMogQWxOzdZImj3FLl170cmpkyyjRZ09D/ypEF0VCezLLJ65ynZSH5ypY7J8fTaDRibtu4bKTKbjFDYiripjFNYiTmAEJeKZDtul/WZQB5NeQP1tA4bttnJZb/mL/AJHnZ7Tl5csVZE+cH05lIHkqMdhM32+fNW9baUdV42OzYoKdiQdJXQLUkwiBU53xg/us/f1hpix8TzQg8dg+gilVnQgQmMzBXmbpARUH+S+U6qR3kT8w8l8uYA1v/0piilpYawO5hwOE7/7mpxTvPwOWlKpZwYZH5SYDxhn5Ucu+Pp+aDOzyNgdm70O1eMgTHUjw9toM4Sfn2DNIjz+UM/MRVprGfd8U+JG4Sb6QGncRFvB26DREgoitIhYj/kkRusYtRB0erGutJLI1Ri94M1qgkbNGSs6PMIWO+KFH/+f61kYQXvbTMZ+2BPw3DCwccxoQZ6oCUr1wzMSAN27QzxlX1QoBrAFH5OAw6fAlELMzZSLNVkyWOc0c4Nxrqdav+/4YLdHDuvBI6NxgFxFIXhnJrT1vWG9HV+UcTRFQ200u44otN9VLIz28KaQCK111N1KDKvaKPUekj+ojHDVjGE9fv0i3IrD3BuaPUI3s+d9WBnyeUuKDdo23Ka+1vE9/K1KNqwLncAg+ctk23VbSN7zYYsAJD6snu9wMPQiS+ZC0D5IiNJDZbNyi458qGJECA9D/ej9ZueKcaJ9IZzpWuuzMfpTCFgzcQTwVef54+cuSdwIJjI11mlU8XnD69WHUDo6uS3lQXqZn5ZqOmJvbnQLe/xgmvZ9L/XIguclwqIZqhcDBSJcxf0hlC1jSsBBrpRLtG2JEEvewdp/VO+NiUBS0HQESP4awH4fM03cYTQxpFQzhnwW+OmJ5BkD6MjLqq9Rz8QMMzJlRSwVWiTFo/f5xpU3AK2XPeg28zAE58x1fxfikfvJZBzR6dU/7mdNn9q958GKaJzEGncJXn6VI8+vrJYOc0nzapGf0cUjLF4Av2Vf1G3MTA+BSDeWP68ChwWhn51GLVIDz56hK7FhYXP68pSnClvHwWca2iYupN7qwUPAyFlRwojKA4RJTkwRfQOC6mQ1EtnP25fB8JxTcBtXD9ojX4McUh1uMMv2UglnpzaFNaJsXLf60M30J2lyaO6c6OFqk15Gu6bAYOUCaDS5dVjO9HeWE0Dhx0AgnMuI4X3H1SGmGACyJK3dcidf5JygNBEFFVr7TlBGEi2v5xyc/jVl43UHdz1lGgCkrKr1tloJB4CIdKiBGDukXJYreCkjXqiRi3uGRxjdK1qeg4puBaSFsu3gMt2pwebX31opuFiho7SZeZx+gaIrBD0EQsRBxS4Spy+Z8m95tyN6+YWDr3bwTszcRxZYuhHzSXV5yLThYVAyV6GF20CJYbpouZn5w5SS0DlYOZOa43Vo0xgNLYmXyrducP35h06vjcLS0VOTRhvA4jqCOo+t8IbEJBbNfs9VdKZsYr391yGZ/YV++CUpMEcfU2TSuk2fOXquKLHC7Zwvag6ADbpJpNfKFH2cDskWWui7AAWTFSy3ezAyJ7S9qQwrknraNzSYDiVU3Gc1JDU5XxykvMMmrz3aaFcfnAD2xp0JB1ca2dz5BiqPCOlQ14kKh0DY3aaxDV+XMoVKBeiKv3GoA30I/SheRdjrKRLSrDij1f9ZfWnXODcwZAcQrPxBHkh+MxTifT/QTJRgvc+Q7XQJX6UJ2jAj35INXuLa1ZVGS+65EAIii6q70OviUZFXyyCZ802tdgyvRLXPMwkL+UxpRO0ZpE7YosOOyQR2X88TJRaf5p4vSfZ+kn5++/ZzBZueMf6dR3q5TGIuFnir/2AA4ED0TnEBWI2pAbTRqfv6MauoMzEYk1bd6akhuSpo3SHT8ptJE5gSxTdsxCNHFExcb3VMBxBl2u6smgOIrMeDq7WUo4NTwdXNuBtX1lLI5ZmDyJUIZTxxejscssmwmypbZvFUoDERce+nK63uJL7CXp0aMOtLS+D1NZ6epe41HQQTy6le7PrRMi1Vsxx6MpRjO0pz+RUGx/3BqRthymLch2MJQIP6cSp+QZebIyoQJ+05CyyCW98o6RAcam0uxOrc7IC5dsgBkvBLOdCXDhcHs4RTbglmbS3phP+sTn++pS0HDlCuy61nDCYyw+XiISBFB80nNO5nwjcBNWRS8+9yyl5d6mR/PcxGpdQ5qUAVtUYIv2ZZ6AulcTqaMYEozyg0YcXAyVEmZzttCo5Oyg/BTJm7RmusGbnFtuH70QfNnuVg4179rgoQUMJ+iY/NyWoHV0bnYBTT1jAaL4ZlskeMHUaP0AN6ycBemne8/JyZUiJiKJycE3JLb4/Yukc4btwfqnarTR80fK9r2GaXUlUxh+O/CpOUQ2HkdjASOcuQ34HFtjAJ8oUY8z9zw2tbQAPMT8E4wrmXaqSQY/MBEdsSMCeXfydyS7vo5NoQwuF2Q+gGZY+QGWmaRKs0cn/E+FCEbEC6sQgLEfzcOUXDWPGtTVCAKJgBnRP4BPzUuiYyMwai+fSRoCSrCE6EMAPJpCe+CyXzMSb/0ce6yYrOjTQWWewitbtFAXvl1L4MIKjKPrLAUzgtAZZJeCg74umeigYp6TsJD/Nr1AnDznwu5QbQOa6ShWny90Hwz+VyfmkEWRKTeUQTAYko0h/6jHcoEE2FNYdhI1BAUoSJjsci7PdYNqJQs0AFkjdCVzc8tsB9Cc+RzDBumMEwx4ZuolEjA1deLrLt3hGLzhOm/daIGJpDnp7fmQM8jK/cCrITs6Ips1Siu+M39AVDMPKLLq/PQwcfWJlr4rkaZ1dtLnHTsn5mW7PHFKy/Dkt37BbW09uAFzkE1/NiF0NTd81RHfeDslKH9yUWQ/ZmqplmZX0iMcbHm03XYsmcwzDRJWDdkrvnVjWFKGPZmRhF9SZ9t2Hs5n1yKsWYuJURALN67BZYT0OP8xNBd6hs+Ns1iD+R7mF8wYBCoRL9qRG3FmOpZDLz/HYsdQY5zFeiDv7JbJOw3e5X8YUY48IlEKm1UPbj7YMxbEuaaDOtAw0A57THCMmAZhlvEaY8IqEt/4QvgPmZbC5FRWOuWQKLSoaxoARUyDGSuy9UiZCv8ci9IRCvnDGJPUUXvxf1Jiye/iuM7ij1qy7iVFpuLJPRnkQ6UNRr5uDqwnR7cZ/9HxmI71GwXjdJd7wnj+Bdwi9+
61TeqDxKikbpVq+LCYitGFLNw21HOA/go/J+4p8n7WCWHbvXZDL8OO1lSfBjpqgaONxJfMALvo1F+rldriDNgS2Jx+UkWxKb66MaCrw9L33cEaVdAWYHHPKQUMJUqyJWZSLg6/wRlht+6nSvX95JRBab3x5mMgkcAXFo0Sd4l0xGdjJQQuI9KHgCtz9WAHWvKr+12MNzaiVMXBP5t0UvwD5UOImrdRY6kZd3zThcuQ3H7B8hc4A/Mi/UrtnjKqrqEnKSa5pOpDx+tKb4bDjBkaEg0VZVjd5PEZz7DRIlO64t+W6eG6jJJSOk9AgkUOYNNmCAHhh/eNDON4vMzaCOc+63+8M/NL7D1sUFaAS/sP0E1JlAmui6bcRYrfrUJ/gKOWOB1T3O9xprllwNCRqqo7MTeEvgY41cREtKNak9Tmvv6B5ft523slGsZzlv+egnbXyDNlyq0EDaJxCyl+sfaBdrSloPIgXI7Rb/kQptjoMFslw7DLkuyjpq0wjowuP6+ciWO4YL1wg2r9XqFfcLoNH2XtpboJMQl8HeO/yBfLHefV7t50xVBobOKXqlS/l1npBzArYsqTEyJpIsSFyk09HcNm/Z4NTG5V4BRCCDk+k6wNqmI+0rTluFICGDPmyVeNg0oFohrcbSa0svZAaqeXgqTWA8HnQ4ifVrQY/m7qKBs8XeA0UwBS9dQYZ/X7GSgIT9wVcNXY9JLXD+vrh65hBVsomw3FQhTm9s3rUEFa9GESSzfgOIp5Ri8vY3EfbeeiakH/I0P+duG47ZRAznAjjh7J4l3PrZYR23YBvsBcxSPuYzgCBxqWuMTzPA4UtXMcmDzMA25Q83P+1uCqI3BXpq0K7EqCAcfMQ2zSyzBYAQqJSaD6ASaPyM8IO8vW9iLiMgn/3HuLj3pP3Cm6WmV+1r5doZELkA+b/NKTabczxK3GG5xbDfixlK6J4hQc80UPtlAmRdPBsNE2/8uF7qP0Uf8XUfFfm2VNJQZA7S+PdQ/8Qk279H1lQrcst+Phgtc6P37hbpoqfglqJFxup4vOSp/orzRL5gkEQiLKz29xBn81MIfkuKyeai4OpQ9abHjcC2gHfnBoNqb+Fa1JCLATpTIxpFmfjCb2H5svRJBazFIkf540vWtTgwMTYWmmN9nYpJxvOHEL88Xc9qR79cyAuECZi5Re+Q6Lis/0y8U86D6nmDKZ3bkkCUefkgQJEXzP4/QL3zmLaDBoAoU9Lk/IDPwHtzEhP+AJ8PhkNHR7VRJXPLVGMjsi53DuiGgYplZTY9wx0dkJ+bfITrInvoCtEvp+MEijpt7OT7mngWZ/pC8b1DcfGV8xqGMFBPSPZ+Tknzjab6d0RGZCCDv9eIB8+ELvQVJojMSM2fxyAxEtn34yc48+VH/er1cNAQ7Nq1XtlVjF7DCrv9P/OLTGyfxaPJUyHB52emouAf5NL2l4naebh0g/iRSzuZw9YonX5730EZ+0nR3+uAEpoWZEGZcDVYbn5+0mGwUZssd2lhjTswCngBmSfLUYjAF1fyPMilBuRgquPjPc7JBqzwH1bvoLqiaEVl0RLKjkJaLBFXLLiQuwmIpdYh8hjYRCoJJu4CPXIJviox1KCNyGMe5gcXIjUv/+eJmZ306GWzq0PIgWX0xlvro9SC7/zAlBqgJsfvZREOuOh9W/1xcWNIxPZ+3QX/KW4OZXrykdkOpv1fb9Pr2WC1yoKMNY6hnEIXCorX/rYhiV1HWIymPikmsqXkk+VBPb7zlHVAMJNr47L/++W1lxvPuq5wja42PIqPsc3ZDoXm/RM+bE5Uu/wEoR/eZc/6OXNvlsnqUWTCe6kcQNoKsrUd8uhiNd66uWqlgUMzYaxtXPquyxG9u2g+AGIVpRSDkhZltqzaDZqWIlDlNLpoLprOciy+pit6Iiz/vQUfci9FNhEYC4whlef3x0Kfr+DNdE+kcOvaOnBc+8RObC0NpJl55Zb0Cq5cwae99+rtf0iwKhQ++gnkTqeJ+hWY8aJiaA0Qc+3hj8rRSBRyMqd+IG4VsL8VV4d5qeDJ4OoT2nOCD+5F3FHN80KxK0hn61+EylOWoTkjGgWPSKlpw78L5rDAjiseOLkREgpQy4HiThd5usGN1vxc07JUOY3thtMPCo8KSQKT9MWJbElRsrc0IzowIsJQDKncTJOCrzT9uP4ISqkkxbW88kQIJO6WH+nC3QPURJ5V0U5PMjZukHHmiMz3bJhomEFKXD9WFvy6z7T9DhWx9RJE7wnaIEys9dxphe/ASYUc349mc5STCQ9oPgktqTA8AxUuFX0qgoIjB0kPoj9Jn0uWdXXEnE1akB5S0yti3ACLsKPLaCVTyuzyc2Az2zeAuo7foK0F8VE0/oKKvnftP+OCgY0MI7z2DSo3imv9/8/tfEVASHBElGvYFQ/KDKZ8X/KRGiooUpRM/EuMyLfK/9sxxftr8nIjjTwDJMIRydFAEhqjdyKOaYAX4r765aLTOdrzZ2YycX3Ni3q1PsZxx3tlGYYAKIB5pZbW6h4b+Zvt3MbUrqpUEouEvJzyTY4uRvuOOPFd0qeeETel/KNEuc9CeqqF1BkVd+XeuEwOK7h1FLFlYyGDsu849COShVKGaGaNV+/ab/VmZyup6k7rjdZbk3JFGOaaWgjufW2sMO0dKuTNIgih78ztwFyDYGpiREM74hv5C3vrOyI8o/llh5GyvMeRmao/QrvOH1FEPFhhpACZxEdwzsUA+C6fYRoyenXWAHjoJPn3iOpCktXAo5inFnPA/F0lS7cOsYV/dAli5O9M+RFll9bXbh7YUZNsxi9Zj4g6x8S1szUVb5HwK4fH+o0ln63VWkC7jLMkMhrPWJdJlTeDK+igUORBFNlZp3vJ6zZsAOUnSwMY22mGsHL6PipWxcZbDlUV6XXn733WMpJDSO38veneOOpJg+Dn3kfcmxegEV2WwGI4XI/+pNXBba0sP8NQbusJD+G7Aw5HD7NT9XQmnh5+tVxKuCO0mOvpWRT21saONKTMthrhVKwEXQAmx5pwUn48GP2vi/MgbwqF6Rs0j4nmmqaAM0VxHSXeXceGWJTV2A0xOpdo32SjYIkCGdrvjUQopC9CsQyuUbFg4lwwaDv958Kb9DubwUshocG4k8LjsqbDVOHhNwyraZUen+WPJPpEc2Fs4z3M95JJYhi/ZDpsbXZuIHDjJUY0UjDw0H+gq9U9sR0FS8+bzUUlS96BiOgx0sJtedZ2davzHCkvPiwVrec0se7hpxIuFrNM/EGDV9flBI7+D1FNfPdpbDmjI5r3WYF8KeJ0aWQ+kWEuP+qhJQxR9OW+4q9cLBgPuQ0Ue0i3+Djv1aXuGKfCBBR+uOAGodV8mXiZSP3xXCYb3ES1KzD8++T9wuravJ59jMqqJjEqwxGLXpI1x0ruy3rDvQ/t/WXdrjzvrmIVM2xlUXLtJNmP7dp2mkx5rQNmmtOxSc3ZHnV9Sdm8KYfT9oFkuEgGXGrdKiHDh1ls+saVSwrEpzDQ7vbfBJM4qQsyaHv990+xhwlmGUq5I/W1cMhTKY4YLluXgxywDBi
j+hIKMro9uqzb/nAcmAxRPJIhpOQX/Fc+igHlhDZrvknBw1dDmKpaSXSZDtS+TGpXSNPyiA8+HZUMfVk6kajHeMMr16uhcMk9NV90ESPBiiIwPzhYYp/bTuTZkMnIJ2yy0fBIRCMAsPN01B5xJWKzJyVXi8QFy0hApS4h0r4nZn8v3OJ5GVAU72tVq0cXeCTOpJCcSeVAS9kC81FfQdlVuvd9dgK4QA64SxYw79bFhvsm47FVO/9aO8XtE+hOUnWypvWBfJeE5cxKVNGn8ukBB9CeVFUItwmWBHfxKAC5UBqWI/chs2l3FdaidAmt7jDG5ezwqEI4TxVtMntKRLq2kPYvD6Ju8IwaLYYkgDXMsx9JeuqxUFpKrYdanWuYZ/i6mp2jt3nfhH31+G9wgx0cq2fP+r3RJFtc6mBOX5T1sGfC6rT2mRBiT0Bxldf7Xbn0Ev7ISAATt9pcd0cKOcqQC2i0BXzFDf7+k32a5S3H4XETZz9/hADRwLCkat7MFulD0YT2DL90B/y5V+XNBTeYbjG7pb6LKgaKe/WqBPY2EJWRdMYQs0V9Cns8AN1K1tV/uEIv9yjyiCxTXZDpNb8B8IjZEAp40B0tTG5KQta/cmLY/eMQYuv30j67x2PT3H6FSx9IiT5fzztMxTDT0FBWoPpjPKflHTcon/2R9gWxELe8huGRIX2A9DTrLnAEky/8Ef8ka7+wxLWL3hpXYV1cfVvCaQDIYmshe+FVi3OAgzWDyfW5eMF1I3LGlcSC8+zslogBVg12n9v+WHnIxvILzv5weGupOYxVJyNx3BByer4QoyWZlSJyMIaCugkiQI3SHVIKrQh5m1fxUh/gWjpx1LQ767mhEToSujFsSSv/bFK5r8FMUxNc6VY1NfrGqp/OJeoBxxYl8CPqoUYbcHUprgG8hRl/sLN+X2Q43/CcepNgvFCjnVpBidIeeDgg3RUt46IHu5EMJhsjlHLWENYXCIPvpiiaC2vFQ1FxM5sNtyBQdTrAQ+UV1w7xGx/+v4i+X9IVXry19WdDr3yz/gnyn+xqQ2v5ffk8xdgXFsS1VQ+wKFz8TpI9ozqZW3qrEUIm0MqYG89fYQi+vkhqfaYCUYOo4c6ji97vBiuJ/S1nJsf4D6VpTvgKfzKlDMo32lSyFXtIfegQYgBEIg1HijcD53vN+dqNGBWQZD37r+3DYTB2w8dQEdyHfhgFOXcscJvQnistVVtKUnleZtvuJjPjMWBXOpQpB8raZ2sPOl0vlTLhZjNMW7a4Ez2Jy6hXAgU4I6jl/cka3W27L1Bvfuwxg+7FQhaHcIZnV/wy46XdC8V7rkdswFMlhhFrB1O1pHnZDEWDKQTg7rf8uI0dtOyNmp/914kb6GLq12azub7KKqGNh+65hIWUTPl5jcxv9af7C4gnBnI4ELym2QBFyVK1Q+XIgVX6DC6ny7Xv9dLSG2dc8QK0ySpoWDhlJWtQduZA4UJHkZJHZJABX8rSwSQGX9VKNHj1fNQu1cEWZHzTzVUvr0fa7sUq/aTwnqpWYrVz/5FCLb1JlhMeDc2siqyu4B7eOuPvbYCWHpcUOcnZqICBSO3rY1NBEqJV0Pc1D+l6N+Q8TQ/iObItjgLOwOSZr46NWNe7vsjHBJ/qEMpDDSgrWGl0OaZlOVrnOfc/hwUtCMrt+cq4AfUpZ0nz+7VSxtaEiKEz4CD+ll9GIejeF1ycrxCDpJgn5oUsRoHPJkCevqKisyp2Yga1DoMTchumcXqAAFyY84rwELfi6nXKDad/KkQbVA0Sihn0A0Mw6kZx3iPn7z2unQRSn4Ij4QdWxrFQRxx7V5J3NMy1AooHTWCIGbzwJgjtX4Y6ADGQMZ1TRomMhtSKgOv6lSRmHqa2tYD5t32UYAJmy9e6iWr8ixe4gXquLo30XGH5abDuVrfc5OaAVkO95LX8Ap4Ii459kzRHDtIjeqfBJdrxLkRRrQBnEpj13OqjYfvE4ZELbtMUNOqVokL0pUXlkdlk6N68xnsjua4pvND2Kb8P8YRnpm1prDXWYOtBtq8ale/XjxUdQZhzTqO92cu7/T/PRds4Bqoohbeii8VbO2m9xb6ZKacg833aLMBT727fSfdVO8pxCgdn2X/Nh3ZiWLaYV/Ffi0yNNl8JoO1Ih680VkC3+Uvo9c0iLBZ8Cjs73pHafoxmSNcV7olaptTOKYzGcdlXdi3s/4TLq1wOtO8qYZLnsOQ+nMYpPFNJ8zcATqYZBb6teqvJ6fcab/xnPojo1+1Xf7oU55INr8Z8atEMK0LTXqUdZMneDszxiYAidngeu2Mz98zRZorIggOLFx4BbKyvawCZEmugcJ5lHsu2jwk2ctrmsQU5g1q5uWwQu10yogCQ/ZGKJD6cTnvkXfJi/H4V94X1g5gjAg8oRzgCytTImNNHnFsbJRnBuJvHSQax46L2gChsucfwAdImT0CQnaheLE1PWELtn8fiVOQfdadLFnV6WLchQnl1eFF+4K1SwE4uehPYTB1knXxqBd3qj3sOyGQp4A4cMCJocCmSaaeFD2jVjd/qRjSuVy4DwlyJoBQndwoe6KHYzLDA8xFrFz1TWnF589OmU2UNacwHcJ37pKY5YlHBaWUoDhEl216xNCr9bMwV1Skso6hkFE5qJHHes4CTMmJm0zSHrPFKxSjWgs5ocyp/e/EhZWrfn8Aa9PpfygiObs0hGLV2FQiOhKCPq8aBlFv7EcWpmpkqPaB5CtX6lBuzUeCbRMndxPNCwpUPxriAnH0aFMB1fCoH3bW2ag9PgQo5HXTHA6/5ZKf7t5AtxqN/3z7TrxxF0bDlcb8WfPzxMjnwoaTyIzDhTS/GVMiPN8kNXfqK79fBFaEnqGzs2UND4hgYVNTQrT3JT1giZjAxlHX+9jLtkGTQRvHslpBGtE+ZUhMvllKpKCNvB+l+WOa8H04NGUWcl/soDLQmukKQ84ZGXx65DzZNUcnzbsVKvfFde/81zAL5fxRQlKorT06ZNNXzmcsF2zmGOqEDHy/u+udFKc6ThPDcgggg8DT1HKnhC7qzY53Bg0xTXcGxnSjIef4Rekt+0N5QW/Uto9j8VQDKUGV6OGFfp3KbcpEFMrYzXacrsU3acsBv44AEwEIygHU55BAPTnPfB8i6nm/xDWsa4WzNzBLHJSkmqVByJKsb6g8ZjpozbviLWw+rXxiPmW9XDwxqE9vPeKDnrkejXfCCCchE0ae/c1lQRhSH6nySVyuas0N3my8mPnvOG82wm+UGEwWmwqJ4hs0ERrD2FwX2D9qNKaRKYm255rLEyUQso8Edz8L8SMu+TVgb9kTUMya2l7CVj3EVTgrgR1fd1z8f/4cv/InSrUjKJBEW6nQ/mZTTyseRrYhPjrc2tE9NsA2GA8Sc70JQYXAojb4diuvJ9XuF3LoPvB7bHxaIwwOUKLS5T0qXi9MQKwi6qtwHoZzHCTwQn5xY98DdWHoy3cK/IdenqhMFXMtcYuyBVdAUAxuqgZlTvQ3uGqsNkk6WzeqpATqIqa3t6rFQ4TcLAigrUc1oz6WxcuT91L/SLPmDFbGJX+
zMSpc/8dhGzqrYr1nm9/XoK6XHBlnSsUNlqHpY8zgImgtLZI/Jxgzo0IYW9X4tQ/ILkxNwlINEb/7aMcg7f3XDzVVbslCZfdRqblycIkhrUVqXcTSRhSVhn9HkvjnQN3D13IAJh1UmJ6W9eCWqbLIebyIfudgZzjj0m2iIlWQiSNoXXZC3KrfHjkXm3HFU+FZZ8fw71qwS1xA+a9Im4VldtOjnczEbBWwFRlmP9bS9BvIdzDrPs3ptxopV1P6w1pNCI9iQNM71AWdEa9R19EgKPUqp03PhRmidIzOrVVVmGNrjjITFntyqfm9m0p+Z6F4bpwLmTA6f16ed1qp4yHVJxyHrp8U+T+6uY8FdeWc07PTRGOaNFBakhVAueAWyQ0d5Ppr2YvTHBhcr+W/WbJYgv/Q/4FnzpEGQMPE0spDZICl6gxhUipUGOvuBkovZU/4lhW+cTi5t2K1GfIpR79/5cZHDOtQmJPMZ2oIpvE4LlxHZfJlFXe9YZCjAQBlrqVShuVNCVV72BnMuExHTAy3QYHgYe6E5FGu/4l7FGKPL9pM3uTlg509BIlCMHFkTAZtSjdMoTEl++cnC3qCCG4QTMXWcSwtUHVPov3epsKQ0AbpVlZo6vHtA2q5RiBMHaxTwbGaTXtgkDmDxBD7kdOr+6i5z+7R2qZz7Kz8PS1fSyE+vWSAxFyNj/ip2nk2jrqa3V4DXTwK1uBKh231ioYks8N4uqUGsiqsVC1t7iGSO49LofjhctYqAWcXLLTA55142UXjkf8zmHbFLLy9S2wxbXdiKJiPvhs7I6eTTJYRjEy0/y7glg97JygPj7ZmguAGhHj3Nc3VV4vimkkJVe/m8uU9fFY9WVrbo91JQ/YxrO4S3Sjm0uccAkA9QevjLQ6p9/vJScKzeBA3nc0tH+ptVR4DI62MsIICcfyPfyinPoS8BH0WfRhvZJjFiPwb5tlBpNcurXzmVnr/fpn1aTZvPy5FCKQVEYuOKgMmjNG/zJ9vTd7u/l8bPhTjuu5rCCQxL73NEfPt42VjZ+qPMspKbUj7BXP1C65pWMyDxbQgg4GpxZxcib73K+E74tExmPGoTunQIPfT0wI+VyleWs4YzNUakcLhSVYEJo8bTxmvCIwERe5BU64suojpSlPkUwXwgKEAvVXOZUa+ySC6rdi+FNGU40xRt2BCV/vbFvv4RyRpy54Oz9PRD8o0AqRYQilNkO5mw3gElKdBNwBKBCTeQWNDKq0PJ2rlkGS36eYFiEUpgTh91LRp+RxSOxWcKqMMXl0rXXz8Al/pjRqZba2IvqV8yamqNpvjmfD2u75dQzh70U7S4LLg3cNC0SmGMFViftpzSgUiFe9l5uW2anC3UiDfvD9FgY85gev+Cgzi5aYGsUlRdDgWChU3SA6o7a8yaKBlWv0Ho8+dQl3X1MbwOXxcCsjrulnNcMV0Ew3v/d52OeJmPz7eiB3a5KaXkFj3BCHuQV1B/55SmZ0ZO2z+9Zml55nFQEADtQdWxV+yrV0ekJ6yjiz7deFuMv/h5y0c7nKHs3nQjaFMQtzJNB6igTPnmZ9yDVI/Ba/NgDR+ld0Wtsb0ZQUh6qNnUWaR2K4jSYv0d2631Mi1fYlftvWmM1B1FLBfQiJYMjFVQqKRdrO5ZRx3BYOqSfj5BcVBNWYXNkDg0qxbdqUXj6/CzVf25oQizyXv0n1Z0dBqofDSKeb1T/8APuDF3QI+PrDokZAQX2309NFzqrMpcShUqks8HJIWglyG7PVLGyF+NcsjNYF4/DFYH/n5feqBm9e0d2peAcPy9UTkcz44+4jo2wzIkg+w7msp0W5pe7MtJ6uf+IiB8rdHzlue1VpDdVqUnFB5I0IwPO91ng+L8VcdBBSB9Y6B/TAhrTkxu8HYU+tfZyrYQ3Q8CowGVs3H7lPaMM7m8W/2aVoHlXmllyiKm1LNK6FDHpAQievB5KxO4j1I+creqCIonheLqcErAzcKgMAvxJEzY6evtkCUZQM1bnewUMHqmNSfmGGzeBUziIYtn7oHT2TPuJ2dpY6hclQKwTPzJ/ezzO6mWxfNWO/pxR1UghEUch7mq/FH21t1GjAL0xSwALCR4xw/qr2SnVpqgJR/ul9vGRF/veiOILDfBIVW5vLqapSIeODSFg8HbxrulzHYGlfVK2cqFYbx2NqQJMtV0ZKLZkZtR6XztMCx5w8bUiXz8/+jnojx1f8O/lWeAsZh1AnnvDrS/8kTbmPgFiuVHXEsBNzEoDAsdtoQb+dId4W7LuGy/C11A6Bw7OGnThfLeTrrQhNiJj+VQZlNKhuHlrtXwW+B7QS6OzL6vhFI1UGq6fCpeKzbPXmnlXrntvV1FFVs5DE9rhkQUGShoQ773n9kIN6PI5RDkaRACJnLcwOWtnTit+0Qn1JUxoeWNbwz3gRJkBmg1c/zUxoPxNkF3c5zsH0UcFmQfsYRxv3TloYul22nyM+vDlRs/IEVT9eQJ2Ja7NO0vajt0sHSbpAoKf5SbeK3zcNhDpGPGY+pFhgNldfnbBoXKM7LvP2+GuNd4xdzbPqNOGjc60iIi7yuiwZ+g3j/kydMpULsxWLIN5ZWoS+CYnyEfsgjgxfAcidauJIMEycsHOxV2fRrl4RhEQAmDV5+YQoToIwUzSleGaEng1r0K9uzBBmGIgUWmjf3NEkXBEshHg8LxcLvSwCeAuHLA4E0HeWTCtz3RPrd0PLMJHaETbg60ItrppT3p7Mf9bzjMlg234kJTp6eMkaZv0xNHqb+KIvyO6liY9kacPEnqQ6483glQFxWwmIHzj0Orz1Wsqn9qfoRdZonu7bX/sPu9GdU5StblGOFrfYodJ8Y9nJtTUl8Jr4/InJtrmBwBhNcYyheYdPEmltdX5W5tpHsqqckfAb71JF0riH+DOlLWWvjctJbg8sbww1w2I4kgnWcbqUHlHXVoMSCDvrN483cnx5q+ZM9xrekkLelAnIytUueD3I1bO4oj4zrleLXXAfIt5uVsywp5XWtuFeUKVpBIsREmktwc4OYSPw/eBwVq8s5I1hAy641MJvxpE/OMVNjWW2wpN+y05l6r6GAQ5VUIsO12P66pn/XNPlpqzDnjagGLVf2+gbNJLjNgFQC+hefV6WEURmd4ocvmMtURxR2mSqEI0XWAuICBftLbO+OvZim1okYOOSUkKcrlznOroLu0pOGMYKowVrK/oWjc5XnQr0LgW0Ktg+wSEDKy9UrInE9pUi4oMyk1b/juSbo3bA3kpT0D4VilqadfQ2bB8yZy0O8VllolI4qFALUsGr10i9J/F2yx8AQIAquvWhdD9hekFd1bFGX0TkqQyt7xyjTJWwtlBHZCfRHJsYo4+YamnAjSTjgW1hiHEfvgbed04ar2ialEZKvw+NLeLq05X1FklDTdUFqLrFI+Kcffs/H9rbf3j3L1jUtAZ9ZyN67sO1aKMn3ptL/9A26NGdjAjXBKaEjBl2POW7peRAtsRHFdLPQUUHtupsX6U1QZQBqfWeM8XfC0FAgHp7zDA1pkNg9/BJaTD//925iR3q6TtIjOFkC/fZLwI
IJiw/kJhj6ysOivdk06VhsxVwYpbbCpTVZYELgKig4oNSkYFexPikj5R85f0gjuILpUMDzK9+rNdhpTwnqHZTOzEBZxmstG4/gGJY/prGIWAg9JzdrM0kJoOHDSsU/Rq9NIMArmZs6AkDb+lNkCa1OQNNexVNcIk0M6XV37QfrVjHEqDsNwxakCiI3ngUEgbEhGVgo2m2NDxeO7ifzrpgMYVSbJHp8eqJerfY8YG2JUQuLeozjL84g7dKAJ/bfOhiZxkjb1pElgRJ5zxkEG5wioB6oGZ6wRSD3pTJgjWeViGK96t9gCwVVQkZy9FEeVSnu+pVnPV7dqitBOryZVXpYUDXl5JbWgfS24VK00Ez5Od6m0EgR04sUF57xKEjPykenl/IuVWBHFDdxlUBlLjgmgduqigU5zfOkOwzBaZd3Ty+K33B3MLX2TTwhIC+u3VHBsl3Q/XMhSCZIbk12O7Ryg7mL5WHZQZedHzrRMHEgHtYm0OcOCOX9Rx8OLWfHUh5+XYpUo/TE6yjb9HlltDTnWQ7gH2Zwcpr88RTBUvJDHDZx7XSATNSjHrY0sHW/Da8FxoASB9vPCZz6hj3gyjBX5FV5ccFsenfvgikLNwTpfe/nhRr+JhyvNK7CNQzmHjXrB6E3npOh287QT2OfGWTpO0CdWRIJ+kjuZUrwQPL2UZcbsHvDSyZ7LwdjD+iMZYmChvOxeSdDnOn+Pnc5DgToH/Oo6Hs27fsNLYzlpt8rPogKnCV8ncYhWNyJTPVQyn5xz0/z6/Elh2C5YR0HOMaf1/BO7Jg85+vDcw+RNC/OmaPtGRUAHrO3FyfdfadH7ZuB9tk5WTyse0PNRvYUPSKrag86dA6mCj/V/NyFGm6qFteUwTTbjkJ64O3oJD48cowsDgQdJXGqyCu98f2a1W93fs/kNdIRElDXKFxLt+GVttWc3DyByRYVVuB5OkQa3wiYSO9vVOoF7YclGFAntqVaeFEw+B5dEfRtkEOwS6mz/2DR6euQnasNl+9LykPYObIM2T7zmi/DUjm2MzXmj2YVrK3nwFTUPgPS/rUSdM22YvNjOq0nyLGYndy6LbxbD4opiIXssuXnEIUlIFxKoY+LaWp9YSrFzCOrnJqKsbfJzrTl1J9weZnXTxJbZV48P4CTc/pseyDQc/QzPbVWzjDuhbjkRE9FY0NuovpEIlAOszTwCIeojj7Z2Ipk0ikzXkzIuGnnpHW9G1ad7py8pyT3/IcabNjAk0J/JNLdpC85cmVUm6ZVfOEuwQDzhvJQaSfWf9kQijMRJPzg5iFuTR2zhIXBEZylVsshngB7oMGlJk9ty+Vse0C425Mi+Bux+zUIpibZpmqqu+bf+VsnLpBJ6dzNGq7/rCkHRnwBWE4XXwt0uqwyAZqN5TLq9eAiLanmtSJAah4kQFvwQs3eMLcnT62eb0PLuUpAgo7VoTMH9zV/vN5lvYB1p2Yl91zoUa/yWKBFWADcomeuoPAv907z3VkSRZey5DUzlrm32Z0dUsHnLn79SG+6SBV2m7Vja4stai3DFc1EwhOtqjBk9HMTGlw32dFXxCV7rJuKF++lacAZfP6Qm57GVwwBCD++RoMH35BPuVDNMdxYVMsP83HZQBX6dUKPomycthX9kDQwb+d3+FtYuS5W0Xo1NOEzC7FiWfjE6bKCo4szt256uc0Ymrb99KcpUjxTLKKpeqhQ63HXTCb+rnuDR+3vJdYFIKlBQB0xdpi4PuEo3N0M0F4vYVqo6QgsHJrBAU7l/8zwkB10q22hoV4ag0RVm0sWIbCFOJAG7KirGmiK6kHKr2h9BSEWTzP1lIXsdqyOyxEmIwnh020pqZABM3ncYjUXgxhppGouPTuXxuGodTU4HupsG1RcgHUvQCB6TJo5+QRStvSjYLwf/ivYpkWU9o164oRoRKY6H2F8cdY75q9RA9n24Rm1Inil0NLdZrbXJGpPfo/ElCMgRAiU27AogzGx1sEqoEgkeYBgH+TjDmV+RdPAQ2c2ErJ0lFPeaKX8wIHrJ54sc02znd3eD48kBNA8Qzf9K+e0w6k2B1oDo0q21613h78A16CNoC0dO/6p63QKx1PWhnYuJCBS+Dp1AcJWKh5MGdPXlDvlUZpKCuwVxiLOkUAUVc+dUNE4uMnnd4FD/s0fLkFzKrYrf6HIp1apQeEdTTFae6FiLqd665g3uNLlza4Js8BJeiOOr3qAv37XCKnEpoA8RGNerclpaIrWuhmlpEN6yGL7vFeTnl5UT5HMWBTiVfSNuWE7rEuV7t2iYgggvvQ3fJ11r6jTycEnh354irSXYFGEJ6w9n8o5I+pB36/z4qnkGZAylJTW2vZwmiuJABgVDZabe+Ua47yDDzYMsq14YZft7AwUDFx5ldqgFHR72Ws/g+9e2idFSAaZuPfnyBoZAsadajkfEnJtQtFKgcZrbaj/FZf3I4CqXEYR/7jwdvX0SASuLWuO64L40r26ImuQiPu5bXK5x6mkb7tZ941j97Sf2lDq5Hnxx0n60jaIjmKAr7gvf2vfd/ZmPVCbSi/RpA/af05GpPy1kgMln+anuXiILvfTqTulvKizVbSxA6sfiauhAXi1TZNWVUNvv+2Xfg36IdyITLxn1VVXmX12IimMAkToVfmv2v1bCNas9NNYuAAKF4koh8O9OiXWj4q9JKVn9ZNzjk966Tb7AeyESeTE3IpJSgdfKa1G1VlSwsBxqBTzS8RCTP73WxtxO+7CtM6ZYo//bhJ46hAbk4qRh2/971FWiTJsdgx8ETOTruaopZPyIWPLUarXz9LLYUBnlByIR+DME626ZChAiGqphc9gp4FMFssKNZKDAv9LjubFLcQ4BcCWoVxj2Q1GE8veWKGQs0UsL4VPXhr2fdVQgrVD9MhqqOMKiqjtq4an1IaaxCvzWyFV2PBrMlLr4X5zwCwN6Nlkfv8g8vGc6DP96dA2DLMR7xYT7UC8YRkEMDGBE95Nyx0tljRoyNge+ruuVTGH/LgkH2/r8Oq8/Zd2c1zFFOujGLIR7WgHwpA2viGR76jiWewTridRbjuhBM7w+bRJJb2VGehXvDGpkUwA1xtnkrc59UaGvaEw7Q6zNYSsSrQ7ZYB9fSi1FHyJP0nQ+iWS8CrDkFtfhcc4u+iVB2MkUKyj4YxWOl8nIeqfPux74/GToK0LRpP5TSLfhey/lIBdCGBsaGISE+PaBmF3gOedi66sdBYgyRjSVSCNNPv+/BC9N/pUA7PuruCj1dbSmY/4tSpmsYAdOaxyBwPHF7dTfkIQe+kVR1VUO2GspqPS5xLW0Fe4HFw2dW3eFFw1PK6dTgVfE1IPXy1pGaK9o6U0nUaD/VNW7W3SH4JzQArY0aCnV/mB67LXMIGlAZkMbyfYxGKmEB8wNJKTtn/QoRlEyElpXqJwdrq3xMyebJpB/4rJqDIML7hrnBOXuL10i6k5wNUeik49FO+3RFwmR5c7lxtDutIeGao6RvDL3LsNPRN0P55/CodQIrGIp/JfvwZRAknNZM7hs5QwwT7qiaHaABUcXAMnc+Iub3NxiQQ6igpgs
zXHKXFMiJoH/UKaH94XHwHYVWmrLloC0uEueLgZTJRhbQkICvVgzJ/GhzbbwyH87MgHLM+oCooSHK7MBmTwPTU67yIM9iUjA5+YY3WS4P0iHQfK5mzply/SDnqDQKAdqnzHu4rrlNQEY0lxXo2rI1XNCfF+4WRh2RMQMOvI53gYRXe+6+oRcL4W6cRTcQRypzt3Ei6840IW8xi00dBUVE+nVbiDoVVV9yR+7pkDwdfaqsbKvJA/yVZMosGdwA8O09PXPvEZwCrYMcs7JL0j/odSdkLJwrUIcL4W/EQxd4/U3GYtl6heivz9NiHGkoGpQUE/gSHX/7/YH3ybC/P+4hpSbf7JwmLmqDOMfRh8prpCzBniH0IlK3NtKCQgx4ce6tZLw89Csq7E44mROLcNbvTkewBPB3sS96H4i7rM+Wldr1V8gxNnj9NqbYoOYoXsLCecfDSGnSeS8R6TWz/f+rgr5NVqiF0zFsHyY/CKHttY5O+xmRN0JNydLacLUuEMgjptheNM1J/4i+s+AGHN0lboWsBKBLuEf+s3D76G7/bwn/iHxFVfPdsOgBLqJrlkXScTzWf7mp6431tpT7uiWN8+3LHQbjUY/b2rH/1U0H8Rm0Rdtc3SRDtFuprIEkQQUMDgR7SuDX/5wTV5yNLjcqqMZHl7bQmCt9moMafhLnVOpIwTUM77WJnGVsWFVtdtRUtOW3lRbaeq5D8K14L+J51p/3Mw9vrACGzWhZ0mil4sncAK33twH4bB1skKw5v+8i2f98MuV3da8n+gDbt06wB+E5e+EhUAPYbq68F+yVy75birO65bYvNROFiCwaFbkO366hDhg2jvmBU0AsHP9dsDKQErHBW/OisxgDuMPHhcgD4Fdf/ySTtl8fAtkNOdyrtbw3nse25F7czCMaW8L8ZilMabF7Y62RQdYihcYswq2w7HfYi66eHYDV3AIsK31BUl8uYqy3IaDlrgusJouYXfKfkM7cD5e1PsEnfNB37sIMfgJ1B/GGLWznddsgvDIGMbKxNmcaub0zVyH87AmGBmEodYzOKeunLsbW8Ii7OjokhWzOdf1LJyye5RAdp13udpR5Y9E4aTQXMmarnREBOpQ1BVkJB/p2B7J1KQjr5HSKTEQfhwn6S4OVzounq62dfSFtDQfS0Gr0beOUCt6mnlsJV6083ML8M4vCP7U4MdXcTLi/EhdwLAsbUoofRqjs+nvbYANEiyuGRFv6tHjU69uq8aeyu0QkQSyJdohLLpjrWC//4s/jkHPBPX+jrg03hj8M3OXdVT4yw88nIdDmnFh9EUADVG1uTScQsnmz8Uxa7Hguv5uI41RMJs29LTLoGzZMt8Dz9ajYAH5lf5u9IDb6yS169wrle/jRzz/KaWb7Pq3iOOzvoX/u9BTtOef3mo26oAmdG7i+rgimGjgMQSufqFe1A5h+WH6Wye2Ue6DYag1SQUr1MoG6D90qwkMTXXCRAc6o2x7YPfV31rHtTyFwyi4kNm4YCLKiDOs4iLk1q565qK7IGFzH5jcABPOTUSToBNqow7P5UCCn/I7mASr73GHCqhvENWPvZSzKiOeZIimbYd+//TTuPgFwoWO7HtYJ7vtnqhEUZpq/isgIVngLmUT24jT8lE74xoPm/yiPffW0XinlthSMKcq0U+RiERk5Q5jy920Rb0PZoCYYISMDAWB+GFzFpO27/FUfp5DLt04RDMa6zYfdsIKtGBpPhDA3FvTWiBNkhOMYz8p6aveGkRCMEECOdcKP7sQL8/1iCeyXFKmi3+Jxyfr1yyUe0wOm+y+8/uqa1VtIpzorMOGWMz0cOoOJltR9LpyqDwezF9LDoUC457JBy4U1Nhl/sga+KIUVtH8N4S+ViMPVlTQCCmJMrT6VZEFZpygKo4L5v7BNgInQREi8SAaj2+i/Th7NXJFHiZxVXqxIPf12A8R4TXU/84hKRf0l667mPMDP9VjoQwOu+HL5ONgcWSn+xIDFaauzDmw6wTOWRuBakbcfk1Cq4a/wUJuGdSn6AJZwmc/qp7JgxJ9lWueFhqcxSKRMeIKnypfchi8aA63zjxz1ioh4YTSics2tcuoDCAzAOm0CA9hQVWeCGq8SsDmREFTQq7JpEXclpi5d3mlKQH5LnwmIsZ2MwroHrewKCbvrBpovBzqpZHi97pXyP3cWo64UIFZwBj/JwGIb/+/Pr4zTlXuWv5utvG2HVZa4e4lbeuqIOeSgDXssQuUIAmXkjAl3Y6RGMWVnvxlCY3YLgTZihlQvgJ4qCT/iQ0o+P+Zw5lk9BVUtVn7BpHby2ZLs5hPjO/KZvGsyMLk+Li82xgE+XWU6+oXnmD5a6uk6xcRZN0YocMlnkxdiDdBfUFgAztL2k2UDLqUr9Qs59RUuxZFpssGGXYWDF9rNAlrwR2munjI0+0M6yovK6XRqemJx5q3EamlxVWVU7WUpBodYPq/cCy1ONOo5sHhodzVera/IMM4zOFC1dkQA1BKL4DZ7rzrt2kwxG1xPxOR/pnhMTCFyyOiMbdExYK9nRwjEwOrKRKeNZZoTQ1U1FjM98c4r5cHVUGBPqzPKZ2DWI5GroANYRlgWTa/7OrAhCWUICA0PMpTQVXfHslhmQ9H2Qq2CwWimnZVpi7qfofbc+367Qf8uuhffIPcuwBcDca7GqMPfscvgKVlIIcajZFrpcE0ZQ+IFLoukMIvhIlSStzTpwECU2opZgJ7iDITXKUWqLe2GYt24CRP/KiSvI8zRNjZNXHb3OmJR+A0QOe/mO03KFIQY2fCT0+MvjNmRyrMhAfnyEr5wevDdUgwxOgMBuMFKtAeO9p0bWs1rCAa/lkw0qwxmCmIjrrNPRH2Mwze/NDdL5ycgKVDSI4otaj4e/W1lZ1Ty10mImSNSA9dZUf4+zhCKqLMeTLe63t1xgenVO/9AQsA6d1Ve8sRwOF/QI3vR1T51VFvTzqE3fKr7bRmF+TkjM63ntH93Z3x70OiwyWeCRFknf28mCN5ksAdGP0CIfDwrXlF+un92iazI+kqM+isgVNl3X+DYPLgpk/ZOK1NJCudoQpmMIkdCDq2gMfNP1eU3YXB66QtDN9LF5X7oQJU/0edtOI8l3lNUscRLC5m3mxzm58qnWLZTcCNBYHxDt2O1cGzwp+C0MgCyP01BtjB45SvpIllEuYaWmqaWay3wvADQ9SojG25zBz2ebbT4BrcvzzFo6hGp6ZAHrdRTWZv7gi1jsPg7vk2SGJvRmAcpKVchxUkjBa6NakUeDakySv2VfRZfTW31n4F8CpiKsfcfsIOlyg/0FR0qG8jA1A+C/j3DaaQv7ruBjDl9iclx+X0LoK6NU5tMcHl3QyCxoVLPYFz4bpsOg04iDbOMbv59fJQjTVZOeVjt7Ty/QhBVm5TNIGzrgEL3gqFmRfdX+dvP3tK8sv26W0zPjfvo1EE6Tv62h+UUEVL+d5s9fdt5tnNsCweGO/qYK3au/Lr1Fl5m2IOTU1RsWSZYvrI/MgFXbtCIrGlwIdyYuQFxlxdaWiLvAAxH7RPiSiIY8p9Ua/upiK22qmJE9L
KhdIwvn3adAtCX+sNKRUPVLcQ9oIlcWAKsl8KYU07bg5wK/mhxo0WEhZ61fmRS9yX8hjekbgBMIXKf2HClaUxfaLIgFtFEmgykJkqbOTTIMQ8N2QswgGwZxQhp5o9AQgKOI1XpWLIeD95aCleA+ATLMaI/L+wFN+DE/PzVufqhukFbMPgwcQGQRCmHyud3jNAOn7XAFK5p0DB9/Pf7uVI6bwIfGZ7p+xl8IoXdnyMcj+TLEiTcK3iIvBOL6zY796MIk2Tc+KGSaebBlLTrnwUR3OLpdI2S/JYIT4nBSqfdOCx++FENuqjmhI3hCQDJKQjG3Yuk7KLoJMDAk7p3DSx3G1LLS+TaJZsAPRj2/2lMO7KkfgvqCTHhW5vxjRlYRHHIF3R2p4+5ljdoa4MxMp4jLXMmSvPoRYGbSUDp/sNnt3of5zdKAJy0XlLgLjurkIMGmwEQlKwDT2u2BY01g7bDxt/7YbiNiaIra8+FXZMmf37bSjQX0wLu/hV2w1qhC8Q6RJEfCCIDQBHUxsTXK/MnCfSCdlSG+eJ6J4//I/8MvjkCvarfKQ9d0xOT0Raa41loVKCfg9FdrojK+80ez+3/8r4YjxSExoe70A6MDSWLo3/qBmF9WWvPal+HyYP0A/kT+0Lvjq46TzkpeCldHbAje7meLPVccX6cGtavxsNrYK8izQq2osVkBsDcAK7ihfhP64s0pKp2kh5xDB5FNwwy36fx9mv91LbJ/SvbS1AKFsHoXAEsOxIBB+OQwhuDkSkQ2yXQL3jogE6ctlhQ0DZsyz+ZZpuJSIHR5tFRawZIfSMufaSETsaHSOchEji4rUmQ0LroxaD829e1AfOURu6QUf+jzxec3mPI59HhDlxTCNtDgrdLePp+Wyi6tCW2RqaMgfr3J+9dJ4aOTQKT6CkA34xSionfnyJJHAC5eO5TEEyVMcxEeiZiLy3AaBbSmtDYFLNrHTVtCJLGHz6/pIo6XBnGU9nBSHB6ZACH3Gx+5V5KotArAv+BKrstr3He2hgi0GEZ53Vdu4FqHy8rduhWnpxT0RHUg8J6kWdkLawUC9OolNHIXxc5Rp2yQbY6WU+owh3HJojZe6JIECtaJlM7bypv9prNfD1EmWXhNfCw7fYKyvoU5lFG54QdEOTFjHNIEvmm0EX0x1Q8QxxysedcrqY5CQ5ZA5k+80uiJG3F8jL0AII49du+Dn34yYikqD7cbkCiZ7PpWs3Bcszt9Kc4Z4gDfl7//sKQRlYPvizZ415LNMiTn36nyUkExLN7tfUsdXS8+g/4FrT2pBb7GTQqNRp+Y9jjiVRuqK+BwGfmx8FwQDKNBrSZAX+QbnFkFaqXZ49d+S+d6os50j2k+R6XqqSLvT6Hbm1NyXe9h2cx8v/WqdfJhMFQMJxx5j35FZ5p/tsvNYY1gQtge3hJx01Lt8CHDp8PXl5tjAIdVyvL/gQptoZ861pQQZ+9xuHsj1QytGALErcjOinNKTZMTkMbOx62z1PED2X3e5mq7Sbs3RvTS+tiVtsSSTeCIqskLh6TMucAmKziL6SrRvbpEeQf1ewPLVq4iyGqNTjUtRK+GqEGX5q9wFAKU5gGRE8y17RBYzzFNPDlvOOk/Zt0EmBN+JmQogFOq1Gj20ZoSpVz7zcIPSrAIdbUfCGsuaps0XzLTI4nglPlqhx34nYKCWbWfA+/ha9I4U2ZF1fc4ewciUQBMIUMrq5uoSdOzdlxOGpkPNpTRa1KKxY7HbJ6mxHEF0F1wEF+jxfqp9k8C5KwEASAbsSWmdq7n6Xiy6W9ToYUTp9xksFAk3X18UoehmgAyFgIVQVyZmQUmSpElcAlHsju4i4bVKEa3Xb2rRZYod5vyUkVrAzKNaiY57l0oERUUxf2uPIKACAL46jS/O8lMuD2Nxchbk973Xq+PkHVkz3EQ5FHjXPPx+i2lJVdBnUai0W4ZBXo3Dt8YnJVNW2zPhMQoP1uQAc0wkYYoBzIH4Tt3d9c9aX9dqrx3/81wzDBZRISSqq97NhU1yUTW47Aq6RR3Vqiv40P5wv8QaDkSq+A/Q2VDviFTIvSzfIBP5H37GC8CqCZnn4WpvqjYPlAce4Hv6HaNiOeHGhMIqPpJMsmI+mFI4hs8EFYEptnivdy3DvuRo7BWtjuUBxXaxiWMTAt+xkM5W893DFt4ARxkiwwy6OPPwP3kpY6hrSTaQBxiP2DP1WXj2+KjjiAV8FEmB8wnPlMSL6Yx0yZOwpUC4KMKQA/09FsKmEKskawSur7+oCe2hZnf/C9ufxMmjrQd61Jx/f5IONR8uXpCXuF/NZ4yqZLudxq+5bMF1nQHOK7sSYc2DnEc/vXc4G608pYmepF7l4sgb0IDWYHCdQTd9mouToxTuIfQ98sN26kUHRWu5jPohR52p0mbKNSq8FCocN1SfvlyCZSr4/FFs95Z4CJvrJGB0iLX41pz0LxvJOIlnoypZ8MkUwKAAxxmy23QSjyhdGZRQ45G2Exh4DbUUwwgX9Ve8hhLDTuB11dQiiFRz+hCFFQiGZoEQ/O1cNUt4bHsuktofrjuPs67y4Dch6escZ7IKcCQ/cEpV8OPSW+b7xI021FVwjmC4Wpj+U/4PHEn/dV60vJHspJzDbsomXOs3Tvo7qHbBK4n6HnDzmfHmdGWQICQJxP2WjSmsdqtRS1yRIzbYgRLY2mhgfWQcg9dUzZLnjhUCbE9GL4txXf93QTqkHxg9Q1hBzC8WO+FhYlfTlfuDcHKC2LigtqGGHP7e/sx+pmWSTlUTs7EtjP845D5UdRMhHr6FD08jn+XMZeSNJkGltiFyM3nBqdYGmNoYlQAA09wjx7Szl+tTYXjW4r/uSjCVGGHgbOs3qTT4ZGiPzpO46spsIl41+UY2E5Fluo8MUpQMp13BHpuYx2zWpncRSefTX8l0nS+WOUcvF7wClr4LeMDxvk5FzFm7PJrC38MvXSRCsxEAFRyIvMaC9rUVVm46ZwDjSM9bMu4g/Y8IwV84bURPw56GJGz4jp9ft0m6l05wmsnAP6Efa/1Sb/1G3DupuREsMeDzhP165x3GUmp1YidJqUETpQza81DhUDjaQqZqGBA+djAza7/hEQtPHZ8FA8qj5qpf3ZVAtyq/A2b5Intfi0tuAY0HLdJrkKYf3kRRqBhf7cSs++msh/1vmIMOODT/5MHfbR1725Rv4NsOmPy2Jw7Re6jyLtjj7QNmJeULj2Ba6aaNgJxZj8ZnNTYxzVNk2h8sC/2ODk5+4sMqy4/6dVb56bCLc7v/6UPqAz7aNo5qJV7UCoy9Hb5rVVs3fI9IWE7tXHIK7VP1yQK1FF+pWF+/+mpVt+oURJiWRF0nf0/2vNGswM2pHRW6lfaaZSfmM+oAj/sP/ftRan8diywtqN69kWPAhoowlAJy7UTjf8Q32aVJjAis3nXQn57MzOZJ2lOwbl3kNpG8qp+AksQjki0i41ax/H0wAoi3lcE8iV9yW4a8xkVt5ArzlIOn1YiYGuuAY0btg/nhPZy3fRkCLWdpcOfFRnFtzd74yfbk/fCH7hNoewnshR7PXcr37SN
UlTIUkTbpJqjFTb0gcZyUVJCHQbG8qPKdZJ+ZKPqjEwIg1UfKUfVSnfWv0RQZgeaJtM67Ba3/Iq6tlo6BvygRI6O9CnKC8qnqnGTECxG4F8bjcAQW0F7BRoYITUO9deK2mGrZz64ozyrX1EDYwMtEuk3UagpQ++3vXzZTc8IvcRVITliFSyuWMRB1KUiD6Qw6vWPHYauMIpMPZ3E5HNEpxcfv1cElXIyssqQILOwTWzNh8xx/ZQbdX28nkwlDYu0MQ4OtBO0Y7hBQsk8SwrFGAk+IfDbqFfc31jyhfvJue1oie8CyqCi9abQTwboWZ9uD+QMK1JVcfrsTb6jMzNHjV/3mAqZQizjDskiX1HFxGV0i8ATewG9L4uvwGGeBzfK8k49tHNEtgrBEtYLHn1F01iIk9A3YyiC+3H7sUoK2s2KitEGa4D8etrAQzAFKDihdGls6+8iSx+alDrdxgkOj8y4cPYnhgJ/0M3ICldZoerTBmSoP5kxe5zZOh0QxOc9H8VAmvFSmFun0ncTjMNFaYJJkm0017X3drJmJxMFCM+5VHy7hmDgWu5d7EElCOOGZkQBZm4yD7JJbtrIDiApZXwOr4Tpb+qPg3GUbHayNmg1Rvmy1VO123foiAYUn4siwPCJeBLeGkn4VAAzSRjqJ8jCXRqhZ2qLe8HjU0nowGK7oahZSn2zUM3J5n0POqz7jjpq5hjag5pyjgFbfRXDaPS4HcH0ajo5OGnWJAbcc4Lm0OFUC6HUaoLalGceHnq7rjS/68hWycmGXAAMhLQHKkxmKGQ+9X9KAgV6PzIY96s/6OpKrUN0tW8In3wpMPy/icZDoHGm0zjj4Lwh44Ju4M3bx/SXYyzFc7znKRCAc7MnSgIukrE/TgE5pEb1V7yiPgBjWlIjjAhhjmbDQOrcaX/ED9Zlt66Gg+LHVVKtepJx8M00MAk0o/bdlEhMEu8fmiLw/B6XPzRDyhT19Ym2LSp3aXRAt8+ZBYSk3iFENFNAS1zZfjN7efi5fYDAuBTjUE4XWzTYuzIf2KsBznLSlFY39cMiyJFAm9kDRC6uxOOV1jrL19+qFMUrZyTaiAMaAqrtwUyEltlKBhLjli8uVK7WiK9OOezxDIBQvi6bajU0d6w9Rp3zk82DO/Wb9MAvaMhDlOhrcIE+IIoTu9GijKt6GU/0TwiBPlQ/mlb0ut4P86dv9+0GfglRwTv7oNXppwwzoUAsUspzqHNRyAgwHGPg4jEND9q/+KNOqdNSXkkuXe2JL0CSzMIOC1SSpC5TrVGeEIDky9wzSOpmmDlBGV86DjUL5O3woa00DyBTmwIOFLubX1H/qVoTIHvE8U/40vwb9KeJ/d/D6v4dep0TDsjm6lUgNDgUw8Y/kZzaHLM3z5k1N2T6GXgtZ2IsfMQisIsnS51n4oVMkqc4OCHt78tdZRASdvrX0vbocGJsdpMhy57QSQHh3vQnRWyp/YkmhMKIcRBrcFPOeB8sYOnS8/XlJdDKmsq0vg46rEp2d2YX5rc0kMAKvP15V/Au7k6zzBRP3KNgHWLymINBza8deRWhMQ0JBj1DR+OaYR8KTVNoURxBMKFr0K/Q9LflU4hv+DSH+IHP4YfW0F21Gec4lpoVoa6X3kHkYyh1SyKe4jvsbiGYmXPT0NXuHo44R6Dnjc4whoMYfkvSgDGPprNt/PCvKa00jak2OTKSo9dJg4WDgJj2pzzcOiZu85ZSXZuXWSNOIaZQoC8ltZgkDYwDHM2uWRaYGtPSjjpu19E4qG6egDNtXux6ZYqZYVC8eetwagl8NKcYOeWni0SeM0ZNiVM6QijIMRCyPIkkJUXrdPEsjY+g4/lIEqn6QggO5ZhPchtBqT04RDmCzbB0HrM5N9Xhh0BiVXQjGf4qQoLm+BKM1AGHgymiCODC5cCDGMnDPZiNV6kx5LtbeqwVVtS0XX68RdEW3pk4qB+FB0vlUfhH7dXoGcT2cCVCmyMliVDKU938vmETF6kYs1GqR0aMxOWlC7olM9fp43f8U+Y5MeMcBa7PCs0Rvx8W8m6TL+YI3wUYPeVng92bMWKhEHWBAxXR6SkMMSeXFDZBpYTEwQk7G67G3YMsoaaqzPjkk/c7bbl40Lu7YghlUqlDbjZvDxk3ZH/mj3QGYFllSvEqMjETQNWcIrzUqDv+bS7YSa0Yk2oynz9CEFe3GdfC9+mPzj0Hx/PSlod/WBLldAlzkTb10JZdbbPRiTJUGSAwn8G20mUl419Bq4C0G+4TCuybmHpMdoOkMk72nPnVnLLuSqKYZ+FjuCsliaZ82wh1sBTYLCbwzHJ/YiWPnTJJo7k8CoEORfNV2p3VdJai1InCai4lV3FHxXN8TWQp5+EF3CIuYiaguzbPPjBGd7amegSyBJgjEssgWZ/3vvGVkp1WRSy230pxzV7I10FppI/v+/LB++/a4NTQzL2e3h4+vlXbuo2tgWrwbXpVKNF/kVreVoY/0Il6b7tz8ZLfasX15RW8PvWs9eLWmLN8++L4i4fNvJpFtpk29ipAmWuuqQc+dPv2azLAnv0YgqFs7HuQXP0DxGml9MuKMQIDSVLz70vjCbsC0JsIMwHdEWdn6dj9nQFfrZVqtrKezpmSFiEKnPnPNPOyUeUW9eK8l8wzfhHvUaArpoYELAzE4qVpU7el7ZATrMPSzPiTRRxnQnCrwWz/mii/M6j3SKlVS81F9F+VTw85KZtJlozJO7VjtINIh4+iZbBoYqimZSWGDHWkfw9VzZKJiqVMnme/TFNsfmKJYVhYfsqAL77CIj3y3FEuZF7vZFPxsdwCptVDBki5piB4sYLtW5fIbaxxFdsa6+Uhnbhk5wrsG2sf+ZHzlWQWpUiULhAxrELLkPU2tLk7OlHz6E5PvUUp/ATjWEuwG/PQew+GUnk16mkyUk90sbXios7dmCqt5+gwxyi1U3S8UT/ueAvenNdXI4ZG0hLl7JBnsi7DCdFeZ18vj87rguVCx5P4swnPENFOap0+G9DjpIKwExW1sAviu+CCDW2mPT3ZoIVwlcfeYJqSfiHVwk1MQBBXRsy8TzOskNzDSgL4kk9F7gz2LfJwQf9X9U0VXOV+glUvddGBOFfewOCOEWBCTsIU+XXzrRYf/zgja8VrAM0I7Uwr5KsXLdeyGDYg18TJC04amP2AYypttjjsJJ/31q6Pv5g/pwWHCngvPZkotvkM/ebhcOqJI4ck78S+MjhTfcWN9pU+ePU7rBoK44ZsGElGVfwVuuInbj6leSZvVqg9x4/KoWi6bPX6mvpsGVL73RHPFpzy/PRuccUUilBPQGf+goyj1yrOF4xev5q4hT6AkUIvonVjSO9jBlXDLO+bWCxseDnL6iFiNX7qCW4JQe13stf8vgzCuiyYUyN+GLJK4e+JQAw2cY6LOOcDLgyU1mrmBQrWLPGm5peYrzneK9/Preb8cnblPFs6N6mwP3mrNFV3WUlNl2MII++MUMFtxiJW8ck/TiV11KtjvyUUi0NRybkTQP7hM+xJljFnQffF1JyW1RBPuzdFhp69ZCvSGK/Ibctg1EtzTL7
26gVsKXYCMwvhUq/FEMqlg2UVUdE0w/fVQAmONEhWWsuaJTWnJqJyuPjGC8tOsB+kBDPwEEWr+vUe1lR3S6RpINt1sVXnG9KbSdjHN3lby3AKcR1C5ZwwPM04KZhrwVSH4ivjarioXkzdAi1lMUcqLiMF/8kZnq+5gInsJctfrjRwdqQAi7Z297xGi7E7ZLYJsCDUP82EOgWtw1hWV7Rv7u32QNaYT1lWp4vMflzXa5MOZFl5ZHR3b6hlK7u10f+UwyWZxKZheqrYpefiAGK24IOiuHbked96P1ksnxXZYcuE4BAB9MvFhXNbZ0xZg+0zYY8mYtKgUjevmJNMh7cz2Ud4t2wRDEVXnxS02WhypsQAaDE8NLv/AA" />
These are GGUF quantized versions of [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2).
The importance matrix was computed over 1M tokens (2,000 batches of 512 tokens) using `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions require llama.cpp commit `147b17a` or later.
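As a hedged illustration (not part of the original card), one of these GGUF files can be loaded through the `llama-cpp-python` bindings, provided the underlying llama.cpp build is at or past commit `147b17a`; the file name below is a placeholder for whichever quant is downloaded:
```python
# Hedged sketch: loading an IQ2_XS quant with llama-cpp-python.
# The file name is illustrative, not the actual artifact name in this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="winter-goddess-1.4x-70b-l2.IQ2_XS.gguf",  # path to the downloaded quant
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU if the build supports it
)

out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```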
|
Coooori/llama_checkpoint-300
|
Coooori
| 2024-01-20T14:40:06Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T14:40:04Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
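For reference, the fields above map onto a `transformers` `BitsAndBytesConfig` roughly as follows; this is a hedged sketch rather than the original training script, and the card does not state which base checkpoint the adapter was trained against:
```python
# Hedged sketch: the quantization settings listed above, expressed as a BitsAndBytesConfig.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True (load_in_8bit: False)
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
    llm_int8_threshold=6.0,                # llm_int8_threshold: 6.0
    llm_int8_has_fp16_weight=False,        # llm_int8_has_fp16_weight: False
)
# The config would then be passed to
# AutoModelForCausalLM.from_pretrained(..., quantization_config=bnb_config)
# before attaching the PEFT adapters for training.
```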
### Framework versions
- PEFT 0.4.0
|
feelwoo/polyglot-5.8b-koalpaca-v1.1b_test3
|
feelwoo
| 2024-01-20T14:39:18Z
| 2,173
| 0
|
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"feature-extraction",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-11-26T15:44:44Z
|
---
license: apache-2.0
---
- This is the result of a single run done for study purposes. There are so many settings to configure on HF that this card was written roughly; if anything needs correcting, feedback is welcome.
|
Arktische/lora-blue-style
|
Arktische
| 2024-01-20T14:33:43Z
| 0
| 0
| null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-20T14:31:25Z
|
---
license: creativeml-openrail-m
---
|
hubro/skin-mole-classification
|
hubro
| 2024-01-20T14:30:20Z
| 0
| 0
|
keras
|
[
"keras",
"image-classification",
"region:us"
] |
image-classification
| 2024-01-16T15:31:26Z
|
---
tags:
- image-classification
library_name: keras
inference: false
---
|
martagrueso/clasificador-muchocine
|
martagrueso
| 2024-01-20T14:27:03Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:dccuchile/albert-base-spanish",
"base_model:finetune:dccuchile/albert-base-spanish",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-20T12:14:30Z
|
---
base_model: dccuchile/albert-base-spanish
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [dccuchile/albert-base-spanish](https://huggingface.co/dccuchile/albert-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2835
- Accuracy: 0.4568
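As a quick way to try the classifier (a hedged sketch, not part of the original card; the returned label names depend on how the rating classes were encoded during fine-tuning):
```python
# Hedged sketch: querying the fine-tuned classifier through the standard pipeline API.
from transformers import pipeline

clf = pipeline("text-classification", model="martagrueso/clasificador-muchocine")
print(clf("Una película entretenida, aunque el guion flojea en la segunda mitad."))
```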
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
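Expressed as code, these settings correspond roughly to the following `TrainingArguments`; this is a hedged sketch, not the original training script, and the output directory name is illustrative. The Adam betas and epsilon listed above are the library defaults, so they are not set explicitly:
```python
# Hedged sketch: the hyperparameters above expressed as transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="clasificador-muchocine",  # illustrative output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
)
```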
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3371 | 0.4284 |
| 1.4336 | 2.0 | 776 | 1.2459 | 0.4503 |
| 1.0895 | 3.0 | 1164 | 1.2835 | 0.4568 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
neoxu999/Llama-2-7b-chat-hf-rhdemo-fine-tuned-adapters
|
neoxu999
| 2024-01-20T14:20:28Z
| 3
| 0
|
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-20T10:55:31Z
|
---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
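Since this section is left as a placeholder, the following is a hedged sketch of how PEFT adapters from this repository would typically be attached to the stated base model; it is not taken from the card, and access to `meta-llama/Llama-2-7b-chat-hf` is gated and may require authentication:
```python
# Hedged sketch: loading the gated base model and attaching the adapters from this repo.
# Quantization and dtype options are omitted; they may need to match the training setup.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "neoxu999/Llama-2-7b-chat-hf-rhdemo-fine-tuned-adapters"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, how can you help me today?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```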
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Evan-Lin/dpo_merged
|
Evan-Lin
| 2024-01-20T14:20:01Z
| 5
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-20T07:19:07Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
codersan/Orca2_7b_Enlighten_V2
|
codersan
| 2024-01-20T14:19:31Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/Orca-2-7b",
"base_model:adapter:microsoft/Orca-2-7b",
"region:us"
] | null | 2024-01-20T14:19:16Z
|
---
library_name: peft
base_model: microsoft/Orca-2-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Coooori/llama_checkpoint-200
|
Coooori
| 2024-01-20T14:19:11Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T14:19:09Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
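The configuration above corresponds to a standard 4-bit NF4 setup. A minimal sketch of constructing the equivalent config with `transformers` (the base model id below is a placeholder, since the card does not state which model the adapter was trained on):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above: 4-bit NF4, no double quantization, float16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

# "base-model-id" is a placeholder; the adapter's base model is not documented here.
model = AutoModelForCausalLM.from_pretrained("base-model-id", quantization_config=bnb_config)
```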
### Framework versions
- PEFT 0.4.0
|
Nerdofdot/Nerdofdot_roberta-base_TM_FTM
|
Nerdofdot
| 2024-01-20T14:17:01Z
| 5
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-20T14:16:43Z
|
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7975 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 0.4}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2392,
"weight_decay": 0.01
}
```
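A minimal sketch of how a comparable run could be reproduced with the sentence-transformers API, assuming a `roberta-base` starting checkpoint (the architecture below lists a RobertaModel transformer); the triplet texts are placeholders, since the training data is not documented here:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("roberta-base")  # assumed starting checkpoint

# Placeholder (anchor, positive, negative) triplets; the real training data is not published here.
train_examples = [
    InputExample(texts=["anchor sentence", "similar sentence", "unrelated sentence"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=10)

# TripletLoss with the Euclidean distance metric and margin 0.4, matching the parameters above.
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=0.4,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=2392,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```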
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
s3nh/Metabird-7B-GGUF
|
s3nh
| 2024-01-20T14:15:42Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-01-20T13:10:15Z
|
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/ConvexAI/Metabird-7B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
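As an illustration of the single-file, mmap-friendly deployment described above, a GGUF file from this repo can be loaded with `llama-cpp-python`; the file name below is a placeholder, so substitute the quantization you actually downloaded:
```python
from llama_cpp import Llama

# Placeholder file name; use the GGUF quantization you downloaded from this repo.
llm = Llama(model_path="Metabird-7B.Q4_K_M.gguf", n_ctx=2048)

output = llm("Tell me a story about what quantization is.", max_tokens=128)
print(output["choices"][0]["text"])
```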
### inference
User: Tell me story about what is an quantization and what do we need to build.
User 0: Quantum mechanics is one of the two fundamental theories that describe the nature of reality at small scales, along with general relativity which describes how gravity behaves at large scales like galaxies and solar systems.
Quantum mechanics has some very counter intuitive properties compared to what we experience in our daily lives, where objects move smoothly from one place to another and follow Newton's laws, or where the color of an object is due to the light it reflects, not due to particles.
In quantum mechanics, particles can exist at different places simultaneously until you actually observe them. Then they suddenly "collapse
# Original model card
|
shrutivasa/cat-skp
|
shrutivasa
| 2024-01-20T14:14:23Z
| 0
| 1
|
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-20T14:10:24Z
|
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Cat-skp Dreambooth model trained by shrutivasa following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 2220210369
Sample pictures of this concept:


|
vvvictor/AAA
|
vvvictor
| 2024-01-20T14:12:37Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-20T14:06:49Z
|
Non-commercial use. These models are released exclusively for research purposes and are not intended for commercial use.
|
nervps/Dasha
|
nervps
| 2024-01-20T14:04:58Z
| 0
| 0
| null |
[
"music",
"audio-to-audio",
"ru",
"license:unknown",
"region:us"
] |
audio-to-audio
| 2024-01-18T18:26:36Z
|
---
license: unknown
language:
- ru
pipeline_tag: audio-to-audio
tags:
- music
---
|
vicgalle/solarized-18B-dpo-GGUF
|
vicgalle
| 2024-01-20T14:01:22Z
| 34
| 3
| null |
[
"gguf",
"dpo",
"18B",
"GGUF",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"base_model:vicgalle/franken-SOLAR-18B-v1.0",
"base_model:quantized:vicgalle/franken-SOLAR-18B-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-19T15:40:34Z
|
---
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
tags:
- dpo
- 18B
- GGUF
base_model:
- vicgalle/franken-SOLAR-18B-v1.0
---
# solarized-18B-dpo-GGUF
DPO'd from vicgalle/franken-SOLAR-18B-v1.0, a SOLAR-like model upscaled to 18B.
It is a frankenmerge model created using mergekit, alternating layers of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct. Then, we applied DPO over a high-quality preference dataset.

## Sample generations
```
### System:
Continue the following script for a TV ad, with creative and weird content. Be like a professional writer
### User:
Ziel, a tall, slender Drow woman in a black silk dress enters, her blood-red eyes fixed on Arslan. She says with an excited voice, "Oh, you sweet little thing.". She chuckles, though her smile does not reach her eyes. "Look at you. You're just so precious.". The elf walks over and lifts his chin, her sharp nails prickling Arslan's skin. "Human. You belong to me now. Legally. Nobody will come for you, after the sum I paid.". She tries to look composed, but her ragged breathing and trembling hands give away her excitement. "W-we're going to have some wonderful years together, you and I".
```
> (Voiceover: Energetic and dramatic tone)
Scene cuts to an office setting where Arslan sits nervously at a desk, surrounded by various documents and contracts, while Ziel smirks from her chair in the background.
> Arslan (voiceover): "And then, it happened."
> (Fast-cut to an action shot of Ziel's hand signing one of those documents)
Ziel (off-screen): "All done, my pretty little asset."
> (Back to Arlan's anxious face)
Arslan (voiceover): "I had no idea what I had just signed away."
> (Scene fades into a futuristic cityscape filled with neon lights and towering skyscrapers)
> (SFX: Synth music swells in the background)
> Ziel (Voiceover): "You're gonna love your new life, under my thumb."
> (We pan to Arlan walking on a rooftop in his stylish suit and tie while a group of Ziel's minions follow from behind. They are all wearing masks with the same demonic smiles)
> Arslan (voiceover): "But then, I discovered something strange."
> (Scene cuts to Arslan discovering hidden clauses in the contracts stating, 'Sold to the highest bidder' and 'Forever bound to Ziel, for whatever her twisted heart desires.')
> (Cut to close-up of a distraught Arlan)
> Aslan (voiceover): "It turns out I was just a pawn in her game."
> (We see a montage of the scenes where Ziel laughs sinisterly, her minions watching over Arslan and then back to an anxious human.)
> Ziel (Voiceover): "Oh how the roles have changed. You should've just run away when you could."
> (SFX: Evil cackling)
> Aslan (voiceover, determined tone): "But guess what? I won't let her win. I will find a loophole in those twisted contracts, and no amount of money can buy back your freedom."
> (Final shot is Arslan smashing the contract to pieces as he looks up with determination in his eyes.)
> End credits:
"Fight for your freedom. No matter what it takes. Free yourself from the chains of tyranny."
> (SFX: Heroic music swells, followed by a clap of thunder and lightning strike)
> Voiceover (closing tagline): "Break free with Freedom Fighters. Fight for your freedom."
> (Scene fades out to the company logo appearing on screen accompanied by bright lights radiating from it.)
> Note: The script must contain elements of fantasy and a hint of humor, making the ad appealing and engaging. Also, remember that this ad targets both humans and non-human species who might resonate with the situation portrayed in the ad.
### Prompt template
The chat template is included in tokenizer_config.json, in the unquantized repo.
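A sketch of applying such a chat template with `transformers`; the tokenizer of the base merge is used here as a stand-in, so swap in the unquantized DPO repo mentioned above:
```python
from transformers import AutoTokenizer

# Stand-in tokenizer: the base merge's repo; replace with the unquantized DPO'd repo referenced above.
tokenizer = AutoTokenizer.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0")

messages = [
    {"role": "system", "content": "Continue the following script for a TV ad, with creative and weird content."},
    {"role": "user", "content": "Ziel, a tall, slender Drow woman in a black silk dress enters..."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```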
|
Coooori/llama_checkpoint-100
|
Coooori
| 2024-01-20T13:58:19Z
| 0
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-20T13:58:17Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
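Beyond the quantization settings above, a checkpoint like this is loaded by attaching the adapter to its base model with `peft`. This is a minimal sketch; the base model id is a placeholder, since the card does not state which model the adapter was trained on:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder base model id; the adapter's base model is not documented in this card.
base = AutoModelForCausalLM.from_pretrained("base-model-id", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("base-model-id")

# Attach the LoRA adapter weights from this repo on top of the base model.
model = PeftModel.from_pretrained(base, "Coooori/llama_checkpoint-100")
```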
|
hsndldl/q-Taxi-v3-5x5
|
hsndldl
| 2024-01-20T13:58:07Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-20T13:58:05Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-5x5
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="hsndldl/q-Taxi-v3-5x5", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
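Once loaded, the agent can act greedily over the Q-table. This sketch assumes a Gymnasium-style environment (five-value `step` return) and that the pickled dictionary exposes a `qtable` entry, as in the course notebooks that define `load_from_hub`:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))                # greedy action for this state
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```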
|
DANISHFAYAZNAJAR/my_model_en_to_ur
|
DANISHFAYAZNAJAR
| 2024-01-20T13:51:14Z
| 5
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-20T13:51:04Z
|
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_model_en_to_ur
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_model_en_to_ur
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2587
- Bleu: 0.0
- Gen Len: 18.2573
## Model description
More information needed
## Intended uses & limitations
More information needed
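A minimal usage sketch with the `transformers` text2text pipeline; the `translate English to Urdu:` prefix is an assumption in the usual T5 style, since the prefix used during fine-tuning is not documented here:
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="DANISHFAYAZNAJAR/my_model_en_to_ur")

# The task prefix is an assumption; adjust it to match how the model was fine-tuned.
result = translator("translate English to Urdu: How are you today?", max_length=64)
print(result[0]["generated_text"])
```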
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 0.3014 | 1.0 | 1250 | 0.2716 | 0.0 | 18.4279 |
| 0.2854 | 2.0 | 2500 | 0.2639 | 0.0 | 18.3976 |
| 0.2772 | 3.0 | 3750 | 0.2610 | 0.0 | 18.1718 |
| 0.2752 | 4.0 | 5000 | 0.2591 | 0.0 | 18.255 |
| 0.2766 | 5.0 | 6250 | 0.2587 | 0.0 | 18.2573 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
amc-madalin/amc-flappy-ai-a-20
|
amc-madalin
| 2024-01-20T13:48:02Z
| 0
| 0
| null |
[
"en",
"license:mit",
"region:us"
] | null | 2024-01-20T10:08:28Z
|
---
license: mit
language:
- en
---
# README for Flappy-AI with Reinforcement Learning
Model for the repository [flappy-ai](https://github.com/amc-madalin/flappy-ai), a project that uses reinforcement learning to train an AI agent to master the game of Flappy Bird. This model is the checkpoint from episode 20 of training with the following hyperparameters:
## File: `config.yml`
```
game_screen:
width: 800
height: 600
game_mechanics:
gravity: 5
flap_strength: 20
pipe_speed: 2
pipe_width: 100
pipe_gap: 200
character_size: 50
collision_penalty: -1 # Penalty for hitting a pipe
pass_reward: 1 # Reward for successfully passing a pipe
height_penalty: -0.1 # Penalty for flying too high or too low
initial_positions:
character_x: 350
character_y: 250
pipe_x: 800
pipe_height_min: 100
pipe_height_max: 400
ai_config:
epsilon: 0.3
epsilon_min: 0.01
epsilon_decay: 0.98
gamma: 0.1
learning_rate: 0.0001
batch_size: 32
buffer_size: 5000
num_episodes: 500
other_settings:
random_seed: 42
model:
save_path: ./experiments/
load_path: ~
```
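As an illustration of how the `ai_config` values above interact, a minimal epsilon-greedy schedule looks like the sketch below (illustrative only, not the repository's actual implementation):
```python
import random

# Values from ai_config above.
epsilon, epsilon_min, epsilon_decay = 0.3, 0.01, 0.98

def select_action(q_values, eps):
    """Epsilon-greedy: explore with probability eps, otherwise take the greedy action."""
    if random.random() < eps:
        return random.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])     # exploit

for episode in range(500):                                          # num_episodes from ai_config
    # ... play one episode, calling select_action(q_values, epsilon) at each step ...
    epsilon = max(epsilon_min, epsilon * epsilon_decay)             # decay toward epsilon_min
```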
## Project Structure
- `ai/`: This directory houses the core AI components, including the Q-network for decision making, the replay buffer for learning from past actions, and the training logic essential for the agent's learning process.
- `assets/`: Here, you'll find all the graphical and audio assets utilized in the project, enriching the game's visual and auditory experience.
- `configs/`: Contains various configuration files that allow customization and fine-tuning of the AI agent and the game environment.
- `environment.yml`: A comprehensive list of dependencies required for the project, ensuring consistent setup across different environments.
- `experiments/`: This section includes experimental code and data, showcasing the trials and tests conducted during the development process.
- `game/`: Contains the game's logic and rendering code, the heart of the Flappy Bird game that the AI interacts with.
- `model/`: Here lies the trained AI model, a testament to the learning and adaptation achieved through reinforcement learning.
- `runs/`: Stores outputs from various training runs, providing insights into the AI's performance and learning progression.
- `utils/`: A collection of utility scripts and modules that support various functionalities across the project.
- `train.py`: The primary script used to kickstart the AI agent's training process.
- `test_game.py`: A testing script to run the game independently of the AI agent, useful for debugging and game playthroughs.
## Installation
To set up this project on your local machine, follow these steps:
1. Clone the repository:
```sh
git clone https://github.com/amc-madalin/flappy-ai.git
```
2. Navigate to the project directory:
```sh
cd flappy-ai
```
3. Install the required dependencies:
```sh
conda env create -f environment.yml
```
## Usage
Once the installation is complete, you can start exploring Flappy-AI.
1. Activate the project environment:
```sh
conda activate flappy-ai
```
2. To begin training the AI agent, execute:
```sh
python train.py
```
3. To test the game mechanics without the AI agent, run:
```sh
python test_game.py
```
## License
This project is released under the MIT License, promoting open-source collaboration and sharing.
|
stablediffusionapi/afrodite-xl-v2
|
stablediffusionapi
| 2024-01-20T13:42:47Z
| 26
| 3
|
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-01-20T13:41:03Z
|
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference
%20photography%20of%20a%20beautiful,%20(exhausted_1.2),%20(sweaty_1.1),%20(shor.jpeg)
## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "afrodite-xl-v2"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/afrodite-xl-v2)
Model link: [View model](https://modelslab.com/models/afrodite-xl-v2)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "afrodite-xl-v2",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
k0T0z/ppo-SnowballTarget
|
k0T0z
| 2024-01-20T13:32:42Z
| 2
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-01-20T09:38:48Z
|
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: k0T0z/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
baltop/open_300
|
baltop
| 2024-01-20T13:23:01Z
| 1
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openchat/openchat-3.5-1210",
"base_model:adapter:openchat/openchat-3.5-1210",
"region:us"
] | null | 2024-01-20T13:22:38Z
|
---
library_name: peft
base_model: openchat/openchat-3.5-1210
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
ntc-ai/SDXL-LoRA-slider.drinking-a-coke-in-the-1950s
|
ntc-ai
| 2024-01-20T13:22:55Z
| 9
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-20T13:22:47Z
|
---
language:
- en
thumbnail: "images/evaluate/drinking a coke in the 1950s.../drinking a coke in the 1950s_17_3.0.png"
widget:
- text: drinking a coke in the 1950s
output:
url: images/drinking a coke in the 1950s_17_3.0.png
- text: drinking a coke in the 1950s
output:
url: images/drinking a coke in the 1950s_19_3.0.png
- text: drinking a coke in the 1950s
output:
url: images/drinking a coke in the 1950s_20_3.0.png
- text: drinking a coke in the 1950s
output:
url: images/drinking a coke in the 1950s_21_3.0.png
- text: drinking a coke in the 1950s
output:
url: images/drinking a coke in the 1950s_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "drinking a coke in the 1950s"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - drinking a coke in the 1950s (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/drinking a coke in the 1950s_17_-3.0.png" width=256 height=256 /> | <img src="images/drinking a coke in the 1950s_17_0.0.png" width=256 height=256 /> | <img src="images/drinking a coke in the 1950s_17_3.0.png" width=256 height=256 /> |
| <img src="images/drinking a coke in the 1950s_19_-3.0.png" width=256 height=256 /> | <img src="images/drinking a coke in the 1950s_19_0.0.png" width=256 height=256 /> | <img src="images/drinking a coke in the 1950s_19_3.0.png" width=256 height=256 /> |
| <img src="images/drinking a coke in the 1950s_20_-3.0.png" width=256 height=256 /> | <img src="images/drinking a coke in the 1950s_20_0.0.png" width=256 height=256 /> | <img src="images/drinking a coke in the 1950s_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
drinking a coke in the 1950s
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.drinking-a-coke-in-the-1950s', weight_name='drinking a coke in the 1950s.safetensors', adapter_name="drinking a coke in the 1950s")
# Activate the LoRA
pipe.set_adapters(["drinking a coke in the 1950s"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, drinking a coke in the 1950s"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
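To reproduce the strength comparison in the table above, the adapter weight can presumably be varied (including negative values) when activating the LoRA:
```python
# Vary the weight to match the table above, e.g. -3.0, 0.0, or 3.0.
pipe.set_adapters(["drinking a coke in the 1950s"], adapter_weights=[3.0])
```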
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
brenomatos/xlm-roberta-base-language-detection
|
brenomatos
| 2024-01-20T13:17:51Z
| 4
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:papluca/xlm-roberta-base-language-detection",
"base_model:finetune:papluca/xlm-roberta-base-language-detection",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T18:20:58Z
|
---
license: mit
base_model: papluca/xlm-roberta-base-language-detection
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-language-detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-language-detection
This model is a fine-tuned version of [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0203
- Accuracy: 0.9961
- F1: 0.9961
## Model description
More information needed
## Intended uses & limitations
More information needed
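A minimal sketch of running the classifier with the `transformers` pipeline:
```python
from transformers import pipeline

detector = pipeline("text-classification", model="brenomatos/xlm-roberta-base-language-detection")

# Returns the predicted language label and score, e.g. [{'label': 'pt', 'score': 0.99}].
print(detector("Bom dia, tudo bem?"))
```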
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.193 | 1.0 | 1094 | 0.0216 | 0.9959 | 0.9959 |
| 0.0094 | 2.0 | 2188 | 0.0207 | 0.9961 | 0.9961 |
| 0.0036 | 3.0 | 3282 | 0.0203 | 0.9961 | 0.9961 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
beibeif/poca-SoccerTwos
|
beibeif
| 2024-01-20T13:15:57Z
| 5
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-01-20T05:03:25Z
|
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: beibeif/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cwchoi/whisper_medium_tele_j04
|
cwchoi
| 2024-01-20T13:08:58Z
| 5
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ko",
"dataset:aihub.or.kr",
"base_model:cwchoi/whisper_medium_tele",
"base_model:finetune:cwchoi/whisper_medium_tele",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-20T01:53:05Z
|
---
language:
- ko
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- aihub.or.kr
base_model: cwchoi/whisper_medium_tele
model-index:
- name: whisper_medium_tele_j04
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper_medium_tele_j04
This model is a fine-tuned version of [cwchoi/whisper_medium_tele](https://huggingface.co/cwchoi/whisper_medium_tele) on the telephone dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2685
- Cer: 98.1710
## Model description
More information needed
## Intended uses & limitations
More information needed
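A minimal transcription sketch with the `transformers` ASR pipeline; `sample.wav` is a placeholder for a local Korean telephone-speech recording:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="cwchoi/whisper_medium_tele_j04")

# "sample.wav" is a placeholder path to a local audio file.
result = asr("sample.wav")
print(result["text"])
```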
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3363 | 0.2 | 1000 | 0.3179 | 97.6261 |
| 0.2821 | 0.4 | 2000 | 0.2957 | 96.6862 |
| 0.2838 | 0.6 | 3000 | 0.2787 | 96.5971 |
| 0.2625 | 0.8 | 4000 | 0.2685 | 98.1710 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|