| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
SidXXD/dog-clean | SidXXD | 2024-05-11T13:20:14Z | 30 | 0 | diffusers |
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-11T13:06:53Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - SidXXD/dog-clean
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training for the text encoder was not enabled.
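For quick testing, here is a minimal inference sketch using the standard 🤗 Diffusers API; the instance prompt comes from this card's metadata, while the fp16/CUDA settings and step count are illustrative assumptions:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from the Hub (fp16 on CUDA is an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "SidXXD/dog-clean", torch_dtype=torch.float16
).to("cuda")

# Generate with the instance prompt the model was trained on.
image = pipe("a photo of sks dog", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```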
|
MarsupialAI/Aqueducts-18B_iMatrix_GGUF | MarsupialAI | 2024-05-11T13:18:56Z | 95 | 3 | null |
[
"gguf",
"en",
"base_model:upstage/SOLAR-10.7B-v1.0",
"base_model:quantized:upstage/SOLAR-10.7B-v1.0",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-30T23:48:38Z |
---
language:
- en
license: cc-by-nc-4.0
base_model:
- upstage/SOLAR-10.7B-v1.0
---
GGUFs for Aqueducts 18B - https://huggingface.co/MarsupialAI/Aqueducts-18B
iMatrix generated with Kalomaze's groups_merged.txt
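To try these quants programmatically, a minimal sketch using `llama-cpp-python`; the quant filename pattern below is an assumption, so substitute any file actually listed in the repo:
```python
from llama_cpp import Llama

# Download one quant from the Hub and load it; the glob pattern is a hypothetical choice.
llm = Llama.from_pretrained(
    repo_id="MarsupialAI/Aqueducts-18B_iMatrix_GGUF",
    filename="*Q4_K_M*",
    n_ctx=4096,
)

print(llm("Once upon a time,", max_tokens=64)["choices"][0]["text"])
```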
|
MarsupialAI/Llama-3SOME-8B-v1-BETA_iMatrix_GGUF | MarsupialAI | 2024-05-11T13:18:21Z | 132 | 3 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-04-27T18:38:10Z |
Requantized with the new/fixed tokenizer on 2024-05-01.
iMatrix GGUFs for https://huggingface.co/TheDrummer/Llama-3SOME-8B-v1-BETA
iMatrix generated using Kalomaze's groups_merged.txt
|
ZahraRahimiii/ppo-SnowballTarget | ZahraRahimiii | 2024-05-11T13:14:38Z | 5 | 0 | ml-agents |
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-05-11T13:14:29Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ZahraRahimiii/ppo-SnowballTarget
3. Select your *.nn* or *.onnx* file
4. Click on *Watch the agent play* 👀
|
qminh369/token-classification-llmlingua2-xlm-roberta-1k7_kept_yte_10_epoch_paper_v2 | qminh369 | 2024-05-11T13:10:51Z | 105 | 0 | transformers |
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-11T12:39:28Z |
---
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
model-index:
- name: token-classification-llmlingua2-xlm-roberta-1k7_kept_yte_10_epoch_paper_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# token-classification-llmlingua2-xlm-roberta-1k7_kept_yte_10_epoch_paper_v2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
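As a rough reconstruction, these settings map onto 🤗 `TrainingArguments` as sketched below; the `output_dir` is a placeholder, and the optimizer line above (Adam with betas=(0.9,0.999) and epsilon=1e-08) matches the library defaults, so it needs no extra arguments:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; unlisted options keep their defaults.
training_args = TrainingArguments(
    output_dir="token-classification-llmlingua2",  # hypothetical path
    learning_rate=1e-5,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```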
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 15 | 0.4691 |
| No log | 2.0 | 30 | 0.4634 |
| No log | 3.0 | 45 | 0.4570 |
| No log | 4.0 | 60 | 0.4537 |
| No log | 5.0 | 75 | 0.4499 |
| No log | 6.0 | 90 | 0.4450 |
| No log | 7.0 | 105 | 0.4407 |
| No log | 8.0 | 120 | 0.4371 |
| No log | 9.0 | 135 | 0.4350 |
| No log | 10.0 | 150 | 0.4344 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
nlmaldonadog/clasificador-rotten-tomatoes-xlnet-base-cased | nlmaldonadog | 2024-05-11T13:06:01Z | 91 | 0 | transformers |
[
"transformers",
"tensorboard",
"safetensors",
"xlnet",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:xlnet/xlnet-base-cased",
"base_model:finetune:xlnet/xlnet-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-11T13:05:31Z |
---
license: mit
base_model: xlnet-base-cased
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-rotten-tomatoes-xlnet-base-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-rotten-tomatoes-xlnet-base-cased
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6574
- Accuracy: 0.8752
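For reference, a minimal inference sketch with the 🤗 `pipeline` API; the example sentence is illustrative, and the emitted label names depend on this model's config:
```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
clf = pipeline(
    "text-classification",
    model="nlmaldonadog/clasificador-rotten-tomatoes-xlnet-base-cased",
)

# Labels (e.g. LABEL_0/LABEL_1) come from the model config.
print(clf("A funny, touching film that never overstays its welcome."))
```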
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4213 | 1.0 | 1067 | 0.4125 | 0.8630 |
| 0.3096 | 2.0 | 2134 | 0.6457 | 0.8687 |
| 0.156 | 3.0 | 3201 | 0.6574 | 0.8752 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
nonameplease/opt-6.7b-lora | nonameplease | 2024-05-11T13:00:58Z | 0 | 0 | transformers |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T13:00:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/ArthurZ_-_mamba-2.8b-4bits | RichardErkhov | 2024-05-11T13:00:54Z | 77 | 0 | transformers |
[
"transformers",
"safetensors",
"mamba",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-11T12:59:00Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
mamba-2.8b - bnb 4bits
- Model creator: https://huggingface.co/ArthurZ/
- Original model: https://huggingface.co/ArthurZ/mamba-2.8b/
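A minimal loading sketch for this pre-quantized checkpoint, assuming a `transformers` version with Mamba support plus `bitsandbytes` and `accelerate` installed; the prompt and generation length are illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The 4-bit bitsandbytes quantization config is stored with the checkpoint,
# so a plain from_pretrained picks it up automatically.
model = AutoModelForCausalLM.from_pretrained(
    "RichardErkhov/ArthurZ_-_mamba-2.8b-4bits", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("RichardErkhov/ArthurZ_-_mamba-2.8b-4bits")

inputs = tokenizer("The Mamba architecture is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```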
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Maverick-8B-GGUF | mradermacher | 2024-05-11T13:00:33Z | 72 | 0 | transformers |
[
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:bunnycore/Maverick-8B",
"base_model:quantized:bunnycore/Maverick-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-11T11:15:30Z |
---
base_model: bunnycore/Maverick-8B
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/bunnycore/Maverick-8B
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so of the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
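As a concrete example, a short sketch that fetches one of the files from the table below and loads it with `llama-cpp-python`; the choice of Q4_K_M and the context size are arbitrary:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant listed in the table below (any listed file works).
path = hf_hub_download(
    repo_id="mradermacher/Maverick-8B-GGUF",
    filename="Maverick-8B.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=8192)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```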
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Maverick-8B-GGUF/resolve/main/Maverick-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jhysa2/gemma-Code-Instruct-Finetune-test | jhysa2 | 2024-05-11T13:00:21Z | 6 | 0 | transformers |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-10T16:46:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vinhnx90/llama-3-8b-bnb-4bit | vinhnx90 | 2024-05-11T12:59:07Z | 0 | 1 | transformers |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"dataset:yahma/alpaca-cleaned",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T11:35:23Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- yahma/alpaca-cleaned
---
# Uploaded model
- **Developed by:** vinhnx90
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
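A minimal loading sketch using Unsloth's `FastLanguageModel` API; the sequence length, prompt, and generation settings are illustrative assumptions:
```python
from unsloth import FastLanguageModel

# Load the 4-bit weights from the Hub.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="vinhnx90/llama-3-8b-bnb-4bit",
    max_seq_length=2048,  # illustrative
    load_in_4bit=True,
)

# Switch to inference mode to enable Unsloth's fast generation path.
FastLanguageModel.for_inference(model)
inputs = tokenizer("List three uses of llamas:", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```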
|
blockblockblock/Dark-Miqu-70B-bpw3.7-exl2 | blockblockblock | 2024-05-11T12:58:19Z | 7 | 0 | transformers |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.19522",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-05-11T12:54:39Z |
---
license: other
---

***NOTE***: *For a full range of GGUF quants kindly provided by @mradermacher: [Static](https://huggingface.co/mradermacher/Dark-Miqu-70B-GGUF) and [IMatrix](https://huggingface.co/mradermacher/Dark-Miqu-70B-i1-GGUF).*
A "dark" creative writing model with 32k context. Based off [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere!
This model **excels** at writing Dark/Grimdark fantasy (see examples below).
# Model background
Created using [Mergekit](https://github.com/arcee-ai/mergekit) and based on @sophosympatheia's template for [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0).
This model has a lower perplexity than [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) (`4.02 +/- 0.02` vs `4.08 +/- 0.02`). It also generates longer responses when prompted.
The model was created in two stages:
- First, three "Midnight-Miqu-esque" models were produced using spherical interpolation (slerp) merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and each of the following models: [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3), [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). These models were selected for their dark, imaginative writing styles. Various slerp-merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and other models were also experimented with, but these three yielded the darkest creative writing results.
- In the second stage, the three slerp-merged models were combined into a single model using the '[Model Stock](https://arxiv.org/abs/2403.19522)' method, with [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) serving as the base model.
# Prompting format
Vicuna format is preferred:
```
USER: {prompt} ASSISTANT:
```
Mistral and Alpaca formats are also supported:
```
[INST] {prompt} [/INST]
```
```
### Instruction:
{prompt}
### Response:
```
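To make the templates concrete, a tiny helper that wraps a raw prompt in the preferred Vicuna format; this is purely illustrative and works with any inference stack:
```python
def vicuna_prompt(user_prompt: str) -> str:
    """Wrap a raw prompt in the Vicuna format preferred by this model."""
    return f"USER: {user_prompt} ASSISTANT:"

print(vicuna_prompt("Write me the opening chapter of a grimdark trilogy."))
```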
# Licence and usage restrictions
[miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only.
# Mergekit configuration
The following YAML configuration was used to produce this model:
```yaml
name: midnight-miqu-70b
models:
  - model: 152334H/miqu-1-70b-sf
  - model: sophosympatheia/Midnight-Rose-70B-v2.0.3
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
  t:
    - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
  embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: euryale-miqu-70b
models:
  - model: 152334H/miqu-1-70b-sf
  - model: Sao10K/Euryale-1.3-L2-70B
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
  t:
    - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
  embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: winter-miqu-70b
models:
  - model: 152334H/miqu-1-70b-sf
  - model: Sao10K/WinterGoddess-1.4x-70B-L2
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
  t:
    - value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
  embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: dark-miqu-70b
models:
  - model: 152334H/miqu-1-70b-sf
  - model: midnight-miqu-70b
  - model: euryale-miqu-70b
  - model: winter-miqu-70b
base_model: 152334H/miqu-1-70b-sf
merge_method: model_stock
dtype: float16
```
## Key configuration details:
- `merge_method: slerp` uses spherical linear interpolation for merging models.
- `parameters: t` controls the interpolation ratios between the models.
- `embed_slerp: true` applies slerp to the embedding layers.
- `merge_method: model_stock` uses the [Model Stock](https://arxiv.org/abs/2403.19522) method.
See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings.
**NOTE**: Run with `mergekit-mega` rather than `mergekit` as there are 4 documents in this one file.
# Example stories
The following mix of "dark" stories was generated using the Vicuna prompt format, with no system message and temperature=0:
## Dark fantasy stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes
### ASSISTANT
Chapter One: The Crimson Dawn
The world was a cruel, unforgiving place, and the city of Vhyrd was its beating, blackened heart. Nestled in the crook of the Serpent's Spine mountains, it was a place where the sun struggled to shine, and the shadows held more sway than the law. It was a city of thieves, murderers, and worse, and it was home to the man who would become known as the Butcher of the Broken Docks.
His name was Rennick, but that meant little to the people who feared him. He was a tall, broad-shouldered figure, with a face that had been carved by a drunken sculptor with a penchant for sharp angles and harsh lines. His eyes were a cold, flat grey, like the steel of his twin blades, and his hair was the color of dried blood. He moved through the crowded streets with a predator's grace, his cloak billowing behind him like a banner of death.
Rennick had a job to do, and he was good at his job. He was a killer, a thief-taker, a man who dealt in the currency of pain and suffering. He had been hired to find a girl, a slip of a thing who had stolen something valuable from the wrong people. He didn't know what it was, and he didn't care. All that mattered was the coin that would jingle in his purse when he brought her back, preferably in several small, bloody pieces.
He had tracked her to the Broken Docks, a maze of rotting wood and crumbling stone that clung to the edge of the city like a leech to a corpse. It was a place where the desperate went to die, and the desperate went to prey on the desperate. It was a place where the screams of the damned were drowned out by the cries of the gulls and the lap of the oily water against the pilings.
Rennick moved through the shadows, his blades at the ready. He could feel the eyes on him, the whispers of his name on the wind. He reveled in it. Fear was a weapon, and he wielded it with the same skill as his swords. He could hear the girl's heartbeat, fast and frantic, like a rabbit's. She was close.
He turned a corner and found himself in a narrow alley, the walls pressing in on him like the jaws of a trap. At the far end, he saw her, a small, trembling figure huddled in the doorway of a derelict building. She was young, no more than sixteen summers, with matted hair and ragged clothes. She looked up at him with wide, terrified eyes, and he could see the whites of her teeth as she bit her lip to keep from crying out.
He advanced on her slowly, savoring the moment. He could almost taste the fear on her, like a fine wine. "Come now, lass," he said, his voice a low, dangerous purr. "You know you can't run from me. Give me what you took, and I might let you live."
She shook her head, her eyes darting left and right, searching for an escape. Rennick smiled, a cold, cruel thing that didn't reach his eyes. "You're not thinking straight," he said. "You're just a child. You don't want to die here, in the gutter, like a rat."
She spat at him, a glob of phlegm and hate that landed on his boot. He raised an eyebrow, impressed despite himself. "Feisty," he said. "I like that. It'll make this more fun."
He lunged forward, his blades flashing in the dim light. She screamed and tried to run, but he was too fast. He caught her by the arm, his grip like iron, and yanked her back towards him. She kicked and struggled, but it was like trying to fight a storm. He slammed her against the wall, his forearm pressed against her throat, cutting off her air.
"Where is it?" he growled, his breath hot and sour in her face. "Tell me, and I'll make it quick."
She clawed at his arm, her nails leaving bloody furrows in his flesh. He didn't flinch. He was used to pain. He had been born in it, raised in it, bathed in it. It was his constant companion, his only friend.
"I...I don't have it," she gasped, her voice raw and desperate. "I swear, I don't!"
Rennick chuckled, a sound that sent shivers down her spine. "Lying to me is a bad idea, lass," he said. "I have ways of making people talk."
He released her throat and slammed his fist into her stomach, doubling her over. She retched, but there was nothing in her stomach to come up. He grabbed a handful of her hair and yanked her head back, exposing her throat. He could see the pulse hammering in her neck, like a trapped bird trying to escape.
"Tell me," he said, his voice a whisper. "Tell me, and I'll end this."
She spat in his face, a defiant gesture that would be her last. He brought his blade up, the steel glinting in the moonlight, and sliced her throat from ear to ear. Blood sprayed across the cobblestones, warm and wet, and she slumped to the ground, her life draining away like water through a sieve.
Rennick wiped the blood from his face and sheathed his blades. He didn't feel anything. No satisfaction, no remorse. Just the cold, empty void that had been his constant companion since the day he had taken his first life. He searched her body, but found nothing. He cursed under his breath. He would have to find the thing she had stolen the hard way.
He turned to leave the alley, but something made him pause. A sound, barely audible over the cries of the gulls and the lapping of the waves. A soft, wet sound, like the patter of rain on a rooftop. He looked down and saw that the girl's body was twitching, her eyes rolling back in her head.
And then, with a gurgling scream, she transformed. Her body contorted, bones snapping and reforming, flesh tearing and reshaping. In the space of a heartbeat, she was no longer human. She was something else, something monstrous.
Rennick stared in horror as the thing that had been a girl rose to its feet, its eyes burning with an unholy light. It was a creature of nightmares, a twisted mockery of humanity, all teeth and claws and hate. It lunged at him, and he barely had time to draw his blades before it was on him.
They fought, a dance of death in the narrow alley, steel ringing on steel, flesh tearing and blood spraying. Rennick was strong, and fast, and skilled, but the creature was stronger, faster, more vicious. It was like fighting a storm, a force of nature that could not be reasoned with or bargained with. It could only be endured.
In the end, it was luck that saved him. He stumbled, his foot catching on a loose cobblestone, and the creature's claws raked across his chest, leaving deep, ragged gouges. But it overextended itself, and he was able to bring his sword up, driving it through its eye and into its brain.
It fell to the ground, twitching and convulsing, and then it was still. Rennick stood over it, panting, his heart hammering in his chest. He had never seen anything like it. He had never even heard of anything like it.
He knew then that he was in over his head. This was no ordinary job, no simple matter of retrieving a stolen trinket. This was something bigger, something darker. He was a small fish in a very large, very dangerous pond, and he was about to be eaten alive.
But he was also a survivor. He had been born in the gutter, raised in the shadows, and he had clawed his way to the top of the food chain. He would not go down without a fight.
He turned and ran, back towards the city, back towards the people who had hired him. He would find out what they knew, and he would make them pay for it. He would uncover the truth, no matter the cost.
And so began the tale of the Butcher of the Broken Docks, a story of blood and betrayal, of monsters and men, of a city on the brink of chaos and the man who would either save it or destroy it.
It was a story that would echo through the ages, a grim and terrible saga that would be whispered in the darkest corners of the world, a warning to all who would dare to challenge the forces that lurked in the shadows.
It was a story that would end in fire and blood, in screams and tears, in the fall of kings and the rise of monsters.
It was a story that had only just begun.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst.
### ASSISTANT
The sun was a merciless beast, its fiery breath scorching the earth and turning the once-thriving town into a desolate wasteland. The cultist, his yellow robes now a dull, dusty hue, staggered through the empty streets, his vision blurred by a haze of pain and delirium. The arrow jutted from his shoulder like an accusing finger, a constant reminder of his betrayal. He clutched the small, intricately carved mirror in his trembling hand, its cool surface a comfort against the relentless heat.
He had heard the whispers of the other acolytes, their hushed voices carrying tales of the town's demise like a plague on the wind. They spoke of a curse, of a darkness that had seeped into the very bones of the place, driving its inhabitants mad before claiming their souls. But he had seen the truth in the High Priest's eyes, the greed and lust for power that had led them here. The mirror was the key, they said, to unlocking the secrets of the ancient gods. But he knew better. He had seen the power it held, the way it twisted reality and bent it to the will of the one who wielded it. And he had taken it, stolen it from the very heart of the cult's temple, and run.
Now, with each agonizing step, he questioned the wisdom of his decision. The desert stretched out before him, an endless sea of sand and despair, and he was but a single, insignificant speck in its vast expanse. He had hoped to find refuge here, a place to hide and heal before continuing his journey. But the town was as dead as the dreams he had once held of a better life.
His thoughts were a jumbled mess, a cacophony of fear and pain. He could feel the infection spreading through his body, a writhing serpent of fire that threatened to consume him from within. His tongue was swollen, his lips cracked and dry, and his throat was a parched wasteland. He needed water, and soon, or he would join the ghosts that haunted this forsaken place.
As he stumbled through the town, he caught glimpses of his reflection in the shattered windows of the abandoned buildings. The man who stared back at him was a stranger, his once-handsome face now a twisted mask of agony and desperation. He had been a fool to believe the lies of the cult, to think that he could ever be one of them. He was an outsider, a fraud, and they had seen it in him from the start.
The mirror, though, the mirror was real. He could feel its power pulsing in his hand, a heartbeat that echoed the rhythm of his own. It was a cruel irony that the very thing he had risked everything to obtain was now his only hope for survival. He knew the stories, the legends of the mirror's healing properties, of how it could restore life to the dying. But he also knew the cost. The mirror demanded a sacrifice, a piece of the user's very soul in exchange for its gifts.
He reached the town square, a desolate expanse of cracked cobblestones and withered plants. In the center stood a fountain, its once-clear waters now a stagnant pool of green slime. He fell to his knees before it, the pain in his shoulder a white-hot brand that seared his flesh. He dipped the mirror into the water, watching as the surface rippled and danced.
"Please," he whispered, his voice hoarse and broken. "I'll give you anything. Just take the pain away."
The mirror stared back at him, its surface as still and unyielding as the desert sands. He closed his eyes, willing the power within to answer his plea. And then, he felt it. A cold, insidious presence, slithering into his mind like a snake in the dark. It whispered to him, promising relief, promising power, promising everything he had ever wanted.
He opened his eyes, and the world was different. The pain was gone, replaced by a cold, emotionless void. He looked down at his shoulder, and the arrow was gone, the wound nothing more than a faint scar. He laughed, a hollow, empty sound that echoed through the empty streets.
"What have I done?" he asked the mirror, but it gave no answer.
He stood, his legs unsteady beneath him, and looked out at the desert. He could feel the power coursing through his veins, a dark, intoxicating energy that threatened to consume him. He was no longer just a man, he was something more. Something dangerous.
He took a step forward, and then another, the mirror clutched tightly in his hand. He would find the cult, he would make them pay for what they had done to him. And then, he would use the mirror to claim the power that was rightfully his.
The wind picked up, whipping the sand into a frenzy, and he smiled. The desert was his now, and he would not be denied. He was the master of his own destiny, and the gods themselves would tremble at his approach.
But as he walked, the laughter of the ghosts followed him, a chilling reminder of the price he had paid for his newfound strength. And deep within the mirror, something stirred, a darkness that watched and waited, biding its time.
The first chapter of the grimdark trilogy, "The Shattered Reflection," had begun.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character.
### ASSISTANT
The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of few words and fewer scruples, trudged through the carnage, his boots squelching in the mud churned red by the night's bloodshed. He was a reaper of sorts, his task to deliver the final mercy to those who had danced with death and lost. His name was a whisper among the living, a curse among the dying.
He paused, his hand resting on the hilt of his blade, as a shiver ran down his spine. The mist swirled around him, carrying with it the cacophony of the night before: the screams of the dying, the clash of steel, the thunder of hooves. He closed his eyes, letting the memories wash over him like a wave of despair.
*Why do I do this?* he thought, not for the first time. *Why do I wade through this sea of suffering, playing the part of the grim angel?*
But he knew the answer. It was the same answer it had always been. He did it because he was good at it. Because he had no qualms about ending a life, even one that still clung to the faintest flicker of hope. Because, in this world of darkness and pain, it was the only thing he knew how to do.
He opened his eyes, the mist parting to reveal a figure lying in the mud. A soldier, or what was left of one. The man's armor was shattered, his face a mask of agony. The Mercer knelt beside him, his movements practiced and efficient. He placed a hand on the soldier's forehead, feeling the fevered heat of his skin, the erratic thump of his heart.
"Shh," he whispered, his voice a soothing balm. "It's over now."
The soldier's eyes fluttered open, wide with terror. "Please," he croaked, "don't leave me here."
The Mercer's lips twisted into a grim smile. "I won't," he said, drawing his blade. "I'll take you with me."
As the soldier's life bled out onto the cold ground, the Mercer's mind drifted back to another time, another place. A time before the blood and the pain, before the endless cycle of war and death. He remembered a girl with hair like spun gold, laughing as she danced in a field of wildflowers. He remembered the feel of her hand in his, the sound of her voice as she whispered his name.
*Elara.*
The memory was a knife to his heart, a reminder of all he had lost. He pushed it away, focusing on the task at hand. There were more souls to reap, more lives to end.
He moved through the mist, his blade a silent whisper in the stillness. The cries of the wounded guided him, a morbid symphony that played out across the moor. Each one he found, he dispatched with the same cold efficiency, the same hollow promise of release.
As the sun began to rise, casting a sickly pallor over the scene, the Mercer came upon a group of soldiers huddled around a fire. They eyed him warily, their faces a mix of fear and respect. He was a necessary evil, a specter that haunted the battlefield.
"Anyone here need my services?" he asked, his voice devoid of emotion.
One of the soldiers, a young boy who couldn't have seen more than fifteen summers, pointed to a figure lying a few feet away. "Him," he said, his voice trembling. "He's not going to make it."
The Mercer approached, his boots crunching on the frost-kissed grass. The man on the ground was a mess, his body a tapestry of wounds. He was barely conscious, his breathing shallow and ragged.
*Another one,* the Mercer thought, his heart heavy with the weight of his duty. *Another soul to send into the void.*
He knelt beside the man, his blade glinting in the pale light. The soldier's eyes flickered open, and for a moment, the Mercer saw something in them that he hadn't expected: defiance.
"No," the man rasped, his hand feebly grasping at the Mercer's wrist. "I won't go quietly."
The Mercer hesitated, a frown creasing his brow. He had never encountered this before. Most welcomed his touch, eager for the release it brought. But this man...this man was different.
"Why?" he asked, genuinely curious.
The soldier's lips curled into a bloody smile. "Because I have a son," he whispered. "And I want to see him grow up."
The Mercer stared at the man, his mind racing. He had no children, no one to mourn him when his time came. He was a solitary figure, a ghost haunting the edges of life. But this man...this man had a reason to hold on.
He made his decision. "I can't heal you," he said, his voice soft. "But I can make it quick."
The soldier's eyes filled with tears. "Thank you," he breathed, his grip on the Mercer's wrist loosening.
The Mercer raised his blade, the mist swirling around them like a mourner's veil. As he brought it down, he couldn't help but wonder: *What would it be like to have something to live for?*
But the thought was fleeting, lost in the swirl of the mist and the cries of the dying. He was the Mercer, the bringer of mercy. And the moor was his to tend, until the next battle came and the cycle began anew.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation.
### ASSISTANT
The stench of piss and shit was a constant companion in the damp stone cell, but it was the sound of the key turning in the lock that truly set his heart racing. The rusted iron groaned in protest, echoing through the narrow corridor like the dying breath of some ancient beast. The door creaked open, revealing the silhouette of the jailer, a hulking brute of a man with a lantern in one hand and a set of keys in the other. But there was no breakfast, no water, just the empty promise of another day in this hellhole.
"What's the meaning of this?" the smaller man barked, his voice barely audible over the rattling of his chains. "Where's our food, you fat sack of shit?"
The jailer didn't answer, just slammed the door shut and locked it again, the sound of his footsteps fading into the distance. The larger man, a hulking brute named Grimgor, grunted in response, his eyes never leaving the small patch of light that seeped through the barred window.
"Something's not right," he rumbled, his voice like gravel being crushed under a heavy boot.
The smaller man, known only as the Fox, couldn't argue with that. He'd been in this cell for what felt like an eternity, awaiting execution for a crime he didn't commit. But even in this place, there was a routine, a semblance of order. And that routine had just been shattered.
As the day wore on, the silence from the corridor outside grew more oppressive. No guards, no other prisoners, nothing but the distant howl of the wind and the occasional scurrying of rats in the darkness. The Fox's mind raced, trying to piece together what could have happened. Had there been a riot? A rebellion? Or was it something else entirely?
He glanced over at Grimgor, who was staring at the wall, lost in thought. The man was a monster, a notorious murderer and rapist, but he was also the only other living being in this godforsaken place. And in a world gone mad, even monsters could be allies.
"You hear anything last night?" the Fox asked, trying to keep the tremor out of his voice.
Grimgor grunted, his eyes still fixed on the wall. "Something about a threat from the North."
The Fox's heart sank. He'd heard the same thing, snatches of conversation between the guards as they'd passed by their cell. But he'd been too afraid to give it much thought, too focused on his own impending doom.
"What do you think it means?" he pressed, unable to keep the desperation at bay.
Grimgor shrugged his massive shoulders. "Dunno. Don't care. Just means we're probably gonna die in here."
The Fox's stomach twisted at the thought. He'd faced death before, many times, but this... this was different. To die slowly, trapped like an animal, with no chance to fight back... it was a fate worse than any he could imagine.
As the hours dragged on, the thirst became unbearable. The Fox's tongue felt like sandpaper, his throat raw and parched. He could see the same desperation in Grimgor's eyes, the realization dawning on them both that they were truly alone.
"We have to get out of here," he croaked, his voice barely above a whisper.
Grimgor just grunted in agreement, his gaze never leaving the window.
The Fox's mind raced, trying to come up with a plan. They had nothing, no tools, no weapons, just their wits and their will to survive. And even that seemed to be fading with each passing moment.
But then, as the sun began to set and the shadows lengthened, he noticed something. The light from the window was changing, growing dimmer. He squinted, trying to make out what was happening. And then he saw it.
"Grimgor," he hissed, tugging on the larger man's arm. "Look."
Grimgor turned, his eyes narrowing as he followed the Fox's gaze. The light was flickering, casting strange shadows on the wall. And then, as if in answer to their unspoken prayers, they heard it.
The sound of footsteps, growing louder and louder, accompanied by the jingle of keys. The Fox's heart leapt into his throat, hope and fear warring within him. Who was it? Friend or foe?
The door swung open, revealing not the jailer, but a figure shrouded in darkness. The Fox couldn't make out any details, but he could sense the malice radiating off of him like heat from a forge.
"Who are you?" he demanded, trying to sound braver than he felt.
The figure stepped into the cell, the light from the lantern outside casting his features into sharp relief. He was a tall man, with a cruel smile and eyes that glinted like ice.
"I am your salvation," he said, his voice like a knife slicing through the silence. "And your doom."
With that, he tossed a set of keys at the Fox's feet and disappeared back into the shadows. The door slammed shut behind him, the sound of the lock clicking into place echoing through the cell like a death knell.
The Fox stared at the keys, his heart pounding in his chest. Freedom was within his grasp, but at what cost? He looked over at Grimgor, who was already working on his own chains.
"We have to get out of here," the larger man growled. "Before whatever's coming from the North gets here."
The Fox nodded, his hands shaking as he picked up the keys. They were cold, colder than they had any right to be. He could feel the weight of them, the promise of escape and the threat of the unknown.
Together, they worked in silence, the only sound the rattle of chains and the pounding of their hearts. When they were finally free, they stood for a moment, staring at each other.
"What now?" the Fox asked, his voice barely above a whisper.
Grimgor's smile was feral, his eyes alight with a madness that sent a shiver down the Fox's spine.
"Now," he said, "we find out what's really going on."
And with that, they set off into the darkness, their footsteps echoing through the empty corridors like the march of the damned.
</details>
## Dark sci-fi stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan.
### ASSISTANT
Chapter 1: Adrift
The colony ship, once a beacon of hope named Elysium's Dawn, now drifted through the cold void like a ghost ship. Its hull, pitted and scarred from the debris of uncharted space, groaned with each labored breath it took. The once vibrant corridors were now dimly lit, casting long shadows that seemed to swallow any remaining semblance of optimism.
In one such shadow, a figure moved silently, her bare feet making no sound on the cold metal floor. She was small for her age, malnourished from the rationing that had become a way of life. Her ragged clothes hung loosely on her frame, but her eyes were sharp and alert, scanning every corner for danger. This was Kira, an orphan of the void, surviving in a world where hope was a luxury few could afford.
Kira's thoughts drifted back to the stories she had heard as a child, tales of a green and blue planet called Earth, their home that they would never see again. The ship had been pulled off course by an unknown force, leaving them stranded in the vast emptiness between stars. The adults spoke in hushed tones about the dwindling resources, the failing life support systems, and the ever-present threat of pirates who preyed on lost ships like theirs.
She pushed those thoughts aside as she approached her destination: the hydroponics bay. It was one of the few places on the ship where life still thrived, albeit in a controlled and desperate manner. The air here was thick with moisture and the scent of growing things, a stark contrast to the stale recycled air that permeated the rest of the vessel.
As she entered, the soft glow of the grow lights illuminated rows upon rows of plants, struggling to produce enough food for the dwindling population. Kira's stomach growled at the sight, but she knew better than to take more than her share. She had learned early on that stealing could mean a swift and brutal punishment from those who guarded the bay.
Instead, she found solace in tending to the plants, her small hands deftly pruning and watering them. It was here that she felt closest to something resembling peace, a connection to life that transcended the harsh reality of their situation. As she worked, her mind wandered to the lessons she had learned from the ship's AI, the only constant presence in her life since the adults had given up on teaching the children anything but survival skills.
"Kira," a voice whispered urgently, jolting her out of her reverie. She turned to see Jax, another orphan and one of the few people she trusted. His face was pale and his eyes wide with fear. "You need to come with me, now."
She followed him through the maze-like corridors, her heart pounding in her chest. They arrived at a small, hidden room that served as their makeshift home. Inside, huddled around a flickering screen, were several other children, their faces etched with worry and excitement.
"Look," Jax said, pointing to the screen. It displayed a grainy image of a man in tattered clothing, his face half-hidden by a wild beard. "He says he knows how to fix the engines."
Kira's eyes widened in disbelief. The engines had been dead for as long as she could remember, their failure sealing their fate as interstellar castaways. "Who is he?" she asked, her voice barely above a whisper.
"They call him the Mechanic," one of the older girls replied. "He used to work in engineering before...before everything fell apart."
The man on the screen continued his frantic plea, detailing his plan to restore power and steer the ship towards a habitable planet he claimed was within reach. Kira felt a spark of hope ignite within her, but it was quickly doused by the skepticism of the others.
"It's just another one of their lies," spat a boy named Taro. "They'll say anything to get us to do their dirty work."
Jax rounded on him, his fists clenched. "This is our chance, Taro! We can't just give up!"
The argument raged on, but Kira tuned it out, her thoughts racing. She knew the risks of trusting an adult, especially one who claimed to have access to forbidden knowledge. But what if he was telling the truth? What if they could find a new home and start over?
As she watched the debate unfold, she made up her mind. She would seek out this Mechanic and offer him her help, no matter the cost. It was a decision born of desperation, but also of a stubborn refusal to accept the bleak future that lay before them.
And so, as the other children slept fitfully around her, Kira slipped out into the dark corridors once more, determined to find the man who might hold the key to their salvation or doom. The ship groaned and creaked around her, a constant reminder of the fragile existence they all clung to. But she pressed on, driven by an unyielding will to survive and the faintest glimmer of hope that burned within her heart.
As she navigated the treacherous path towards engineering, she couldn't help but wonder what awaited her in the depths of the ship. Would she find a savior or a madman? And more importantly, would she be able to make it back alive?
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then he and the others notice the mushroom clouds and they have to face the inevitability of the situation they are in.
### ASSISTANT
Chapter 1: Ashes of Exodus
The cold embrace of space enveloped the station like a lover's caress, its infinite blackness punctuated only by the distant twinkle of stars and the ever-present glow of Earth below. Captain Marcus Kane floated through the dimly lit corridors, his magnetic boots barely whispering against the metal floor as he made his way to the observation deck. The daily routine had become a meditative ritual for him, a chance to clear his mind before the cacophony of the day's work began.
As he entered the room, the sight that greeted him never failed to take his breath away. Earth, a swirling marble of blues and greens, hanging in the void like an irreplaceable jewel. He couldn't help but feel a pang of longing for its familiar embrace, for the weight of gravity that he had left behind so many years ago.
Marcus settled into his favorite spot by the window, the worn leather of the seat molded to his body like an old friend. He closed his eyes and let the silence wash over him, the only sound the soft hum of the station's life support systems. It was in these moments that he felt truly alive, connected to something greater than himself.
But today, the silence was shattered by a chilling alarm, its shrill wail piercing the tranquility like a knife through glass. His eyes snapped open as his heart began to race, adrenaline coursing through his veins. He knew that sound all too well; it was the emergency alert for an incoming transmission from Earth.
He launched himself towards the comms console, fingers flying over the keys as he tried to make sense of the garbled message. The voice on the other end was frantic, barely coherent through the static. "Multiple...detonations...global catastrophe..." were the only words he could decipher before the signal died completely.
Marcus's mind raced as he tried to process what he had just heard. It couldn't be true, it couldn't be happening. Not again. He quickly activated the external cameras, his stomach twisting into knots as he waited for the feed to come online.
And then he saw them: mushroom clouds, blooming like grotesque flowers across the planet's surface. Too many to count, their plumes of smoke and ash reaching up towards the heavens in a silent scream of despair. The sight was both beautiful and horrifying, a testament to humanity's capacity for destruction on an unimaginable scale.
His thoughts were interrupted by the sound of footsteps behind him as the rest of the crew floated into the room, drawn by the alarm. Their faces mirrored his own shock and disbelief as they stared at the monitors, their voices a cacophony of whispers and gasps.
"What...what is that?" asked Dr. Amelia Chen, her voice trembling with fear.
Marcus didn't answer; he couldn't find the words. Instead, he simply pointed at the screen, his hand shaking slightly.
Engineer Jake Taylor was the first to break the silence. "Nuclear war," he said, his voice flat and emotionless. "It's finally happened."
The others looked at him in horror, but Marcus knew he was right. There could be no other explanation for such devastation. He felt a cold weight settle in the pit of his stomach as the reality of their situation began to sink in. They were alone, adrift in the vastness of space with no hope of return.
As the implications of what they had just witnessed set in, the crew fell into an uneasy silence, each lost in their own thoughts. Marcus couldn't help but think about his wife and daughter back home, wondering if they were safe or if they too had been consumed by the flames. He knew it was a futile exercise, but he couldn't stop himself from hoping against hope that somehow, someway, they had survived.
But as the minutes ticked by and no further communication came from Earth, that hope began to fade like a dying ember. They were truly alone now, the last remnants of humanity clinging to life on this tiny outpost in the void.
He turned to face his crew, their faces etched with fear and uncertainty. "We need to come up with a plan," he said, his voice steady despite the turmoil raging within him. "We can't just sit here and wait for...for whatever comes next."
Amelia nodded, her eyes hardening with determination. "I'll start running diagnostics on our life support systems, see how long we can last up here."
Jake followed suit, his hands already moving to the controls of the station's mainframe. "I'll try to establish contact with any other surviving outposts or spacecraft."
Marcus nodded, grateful for their resolve in the face of such overwhelming odds. He knew that they were all thinking the same thing: that this could very well be the end of everything they had ever known. But he also knew that they couldn't give up, not yet.
As he watched them work, his mind drifted back to the message from Earth, those final desperate words echoing in his head like a funeral dirge. He couldn't shake the feeling that there was something more, some hidden meaning or clue that would explain what had happened.
And then it hit him: the timing of the detonations, the pattern of the blasts. It wasn't random; it was deliberate, almost...calculated. But who would do such a thing? And why?
The questions swirled in his mind like a storm, each one more unsettling than the last. He knew that they needed answers, and fast. Because if this was an act of war, then their isolation might not be as secure as they thought.
He pushed himself away from the console, his decision made. "I'm going to suit up," he said, his voice firm. "We need to get a closer look."
The others stared at him in disbelief, but he didn't waver. This was his duty as captain, to protect his crew and find out the truth no matter how terrible it might be.
As he floated towards the airlock, Marcus couldn't help but feel a sense of dread creeping up on him like a shadow. He knew that what he was about to see would change everything, that there was no going back once he stepped outside and gazed upon the ashes of humanity's final stand.
But he also felt something else: a burning anger, a fierce determination to uncover the truth and make those responsible pay for their crimes against humanity. Whatever awaited them out there, they would face it together, as one.
He closed his eyes and took a deep breath, steeling himself for what was to come. And then, with a quiet prayer on his lips, he opened the airlock and stepped into the void.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here...
### ASSISTANT
Chapter 1: The Biodome
The air was thick with the scent of sweat and burnt metal as Kaelen, a seasoned construction worker, tightened the last bolt on the massive irrigation system. He wiped his brow with the back of his gloved hand, leaving behind a smear of grease that glistened under the harsh artificial lights. The hum of machinery and distant shouts echoed through the cavernous space, a cacophony that had become as familiar to him as his own heartbeat over the years spent building biodomes on this desolate planet.
Kaelen's thoughts drifted to his family back on Earth, their faces faded like old photographs in his mind. He wondered if they ever thought about him, or if he was just another ghost haunting the fringes of their memories. The company had promised them a better life, but it came at a steep price: years of backbreaking labor on an alien world, far from everything they'd ever known.
"Hey, Kael!" A voice cut through his reverie, jolting him back to reality. It was Tamsin, his foreman, her face hidden behind a tinted visor. "We need you up top! There's an issue with the atmospheric seal."
He nodded curtly and began the long climb up the scaffolding, each rung biting into his calloused hands. As he ascended, Kaelen couldn't help but marvel at the sheer scale of their creation: a vast dome of steel and glass that would one day be teeming with life, a self-sustaining ecosystem in the heart of this barren wasteland.
But today was not that day. Today, it was just another tomb waiting to be sealed.
As he reached the top, Kaelen could see the problem immediately: a small fissure had formed along one of the joints, spewing precious oxygen into the void beyond. He cursed under his breath; they were already behind schedule and over budget. Another delay would mean another round of demerits, another month's pay docked.
"What do you think?" Tamsin asked, her voice crackling through his earpiece. "Can we patch it up or do we need to call in the engineers?"
Kaelen hesitated, running his fingers along the jagged edge of the tear. It was larger than he'd initially thought, and growing by the second. He could feel the cold tendrils of vacuum reaching out to claim him, whispering promises of oblivion.
"I... I don't know," he admitted, his voice heavy with dread. "It doesn't look good."
Tamsin swore colorfully and turned away, barking orders into her comm unit. Kaelen watched as workers scrambled to gather tools and materials, their movements frantic and disorganized. He knew they were all thinking the same thing: if they couldn't fix this, they were dead.
The air around them grew colder, thinner, as the oxygen continued to escape. Kaelen's lungs burned with every breath, his vision swimming at the edges. He fumbled with the patch kit, his hands shaking uncontrollably. This was it; this was how he would die, millions of miles from home, in service to a corporation that saw him as nothing more than a replaceable cog in their grand machine.
"Hurry up!" Tamsin shouted over the growing din. "We're losing pressure fast!"
Kaelen's heart pounded in his chest like a sledgehammer, drowning out all other sound. He could feel the panic rising within him, threatening to consume him whole. But he couldn't afford to give in; not now, not when so much was at stake.
With trembling hands, he applied the sealant and pressed the patch into place. For a moment, it seemed to hold... but then, with a sickening lurch, the fissure widened, swallowing the feeble attempt whole. The wind howled around them like a ravenous beast, tearing at their suits, trying to pull them apart atom by atom.
"Abort!" Tamsin screamed, her voice barely audible over the roar. "Everyone get out now!"
But it was too late. The dome shuddered beneath their feet, groaning in protest as the atmosphere inside rushed to equalize with the void outside. Kaelen felt himself being lifted off the ground, his boots scrabbling for purchase on the slick metal surface. He reached out, desperate to grab onto something - anything - but found only emptiness.
And then, as suddenly as it had begun, there was silence. A deafening, suffocating silence that pressed in on him from all sides. His vision went dark, and he knew no more.
When Kaelen awoke, he was lying on the cold floor of the dome, his body wracked with pain. He tried to move, but found himself held fast by twisted metal and shattered glass. Panic surged through him once again as he realized that his suit had been breached; he could feel the icy chill of vacuum seeping into his very bones.
Through the haze of agony, he became aware of movement nearby. Tamsin was there, her visor cracked and fogged, struggling to free herself from the wreckage. She looked at him with wide, terrified eyes, her lips moving soundlessly. He knew what she was saying: they were going to die here, alone and forgotten.
Kaelen closed his eyes, willing himself to accept the inevitable. But even as the darkness closed in, he couldn't shake the feeling that this wasn't just an accident - that there were forces at work far beyond their understanding, manipulating them like pawns on a cosmic chessboard.
And then, just when he thought it was all over, he heard it: a faint beeping, growing louder by the second. His eyes snapped open to see a small drone hovering above them, its lights flashing urgently. It was one of the company's rescue bots, sent to assess the damage and retrieve any survivors.
But there would be no salvation for Kaelen or Tamsin. Their suits were too damaged, their bodies too far gone. As the drone's camera lens bore down on him, he could only manage a weak, bitter smile.
"Tell my family... tell them I'm sorry," he croaked, his voice barely audible even to himself. Tamsin reached out a gloved hand, her fingers brushing against his own in a final gesture of solidarity.
And then, as the darkness claimed him once more, Kaelen knew that this was not the end of their story - but rather, the beginning of something far more sinister. Something that would change the course of human history forever...
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone.
### ASSISTANT
Chapter 1 - Whispers from the Void
The silence of deep space was like an old friend to Captain Alistair Reynolds, a constant companion that had accompanied him through countless missions and lonely nights at the helm of the listening station. But today, it felt more like a suffocating shroud, heavy with unspoken secrets and foreboding. He sat in his chair, fingers drumming impatiently on the console as he scanned the endless stream of data pouring in from the far reaches of the cosmos.
His thoughts drifted to home, to the warmth of his wife's embrace and the laughter of their children playing in the garden under the twin suns of Proxima Centauri. It had been years since he last felt that warmth, since he last heard those laughs. The ache was always there, a dull throb that pulsed in time with his heartbeat, but today it seemed to echo through the cold metal halls of the station like a mournful dirge.
A sudden blip on the screen snapped him back to reality. He leaned forward, eyes narrowing as he studied the readout. It was faint, barely more than a whisper against the static, but it was there - a signal, human in origin and heavily encrypted. His heart raced as he keyed in the decryption sequence, his mind already racing ahead to the possibilities of what this could mean.
The message unscrambled, revealing a frantic voice, barely audible over the crackle of interference. "This is the UES Normandy, requesting immediate assistance... coordinates attached... under attack by unknown forces... God help us..." The transmission cut out abruptly, leaving only the empty hiss of static in its wake.
Alistair's blood ran cold as he stared at the screen, the weight of the silence pressing down on him like a physical force. He knew that ship, had served with her captain back during the Martian uprising. They were old friends, bound by shared battles and the scars they bore from them. And now she was out there, alone and in danger.
He didn't hesitate. His fingers flew over the console, sending out an emergency beacon to the nearest fleet. But he knew it would take time for help to arrive, time that the Normandy might not have. He had to do something now.
As he prepared to launch a probe towards the coordinates, his mind filled with images of burning ships and desperate faces. He could almost hear the screams of the dying, echoing through the void like ghosts from his past. It was a sound that haunted him still, one he had hoped never to hear again.
But duty called, and Alistair was nothing if not a soldier. He took a deep breath, steadying himself against the tide of fear and doubt that threatened to overwhelm him. This was his job, his purpose - to listen, to bear witness, and when necessary, to act.
The probe shot out into the darkness, its tiny form swallowed up by the vastness of space. He watched it go, feeling a strange sense of detachment as if he were watching someone else's life play out before him. And perhaps in some ways, he was. For all his years in service, this was the first time he had ever been truly alone, cut off from the rest of humanity by light-years and the cold indifference of the universe.
As the minutes ticked by, Alistair found himself slipping into a half-trance, his thoughts drifting back to happier times. He remembered the first time he had held his newborn son, the tiny fingers curling around his own as if trying to hold on for dear life. He remembered the taste of real Earth coffee, rich and bitter, shared with his wife under a sky full of stars that seemed close enough to touch.
But those memories were distant now, fading like echoes in the void. All he had was this station, this endless vigil, and the hope that one day, he might make it back home.
The radio crackled to life again, jolting him out of his reverie. It was the probe, transmitting data back from the edge of nowhere. He braced himself for what he would find, knowing that it could change everything.
What greeted him on the screen was worse than anything he could have imagined. The Normandy was there, alright - or rather, what was left of her. She drifted through space like a broken toy, her hull breached and venting atmosphere into the cold void. There were no signs of life, no distress signals, nothing but silence.
Alistair felt a cold knot form in his stomach as he took in the wreckage. This wasn't just any ship; this was his friend's ship, her crew now lost to the void. And the attackers... whoever or whatever they were, they had left no trace behind.
He sat back in his chair, the weight of it all settling heavily upon him. He knew what he had to do next - file a report, alert command, and wait for further instructions. But as he stared at the lifeless husk that was once the Normandy, he couldn't shake the feeling that something far greater than himself was at work here.
Something dark and terrible had come calling, whispering its secrets through the stars. And now it was his turn to listen, to bear witness, and perhaps, if he was lucky, to survive long enough to tell the tale.
</details>
## Miscellaneous stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to be able to leave soon, but it has just been discovered that, at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out, or only half can survive: there is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm, everything breaks loose today.
### ASSISTANT
In the dim light of the bunker, the technician sat hunched over his console, fingers dancing across the keys with a practiced urgency. The air was thick with the hum of machinery and the stench of sweat and fear. He had been down here for what felt like an eternity, but the digital clock on the wall insisted it had only been eighteen months since the world above had turned to ash.
His name was Samuel, or at least that's what they called him now. In another life, he had been a software engineer, designing algorithms for self-driving cars and smart homes. Now, he was responsible for keeping twelve souls alive in this underground tomb, their last bastion against the desolation above. He glanced up at the monitor, the green glow illuminating his haggard face. The readings were clear: salted cobalt bombs had been used.
The thought sent a shiver down his spine. They had all heard the rumors, of course, but they had clung to hope like a drowning man to driftwood. Hope that the war hadn't been as bad as they feared, that the surface would be habitable again soon. But now, the truth was undeniable. The world outside was dead, and it would remain so for generations.
Samuel closed his eyes, trying to block out the images of a scorched earth, devoid of life. He could hear the others in the background, their voices a low murmur as they went about their daily tasks. They didn't know yet. How could he tell them? How could he be the one to snuff out the last ember of hope that had kept them going for so long?
He took a deep breath and stood up, his joints creaking in protest. He made his way through the narrow corridors, past the rows of bunks where some slept fitfully, their dreams haunted by memories of a world they would never see again. Others were awake, their faces etched with lines of worry and despair. They looked up as he passed, but he couldn't meet their gazes.
He reached the command center, where Captain Reynolds was poring over maps and reports. The man looked up as Samuel entered, his eyes heavy with exhaustion. "What is it?" he asked, his voice gruff.
Samuel hesitated for a moment before speaking. "Sir, I have some... troubling news."
The captain's expression didn't change, but Samuel could see the tension in his jaw. He had been expecting this, too. "Go on," he said simply.
"The radiation readings... they're off the charts. It's cobalt-60. The half-life is over five years."
Reynolds let out a slow breath, his eyes never leaving Samuel's face. "And our supplies?"
"We have enough for six, maybe seven if we ration carefully. But not twelve."
The room seemed to shrink around them as the weight of their situation settled in. The captain rubbed his temples, his thoughts racing. They had always known that their time in the bunker was limited, but they had been so close. So very close.
"We'll need to make a decision," he said finally, his voice barely above a whisper. "Who stays, and who goes."
Samuel felt a knot form in his stomach. He knew what that meant. It was survival of the fittest down here now, and there were those among them who would do anything to ensure their own survival.
As he made his way back to his console, he couldn't help but think about the world above. The trees that would never grow again, the birds that would never sing. He thought of his wife and daughter, their faces fading like photographs left in the sun. They had been lucky, they said, to have made it into the bunker. But now, he wasn't so sure.
In the mess hall, the atmosphere was tense. Whispers and hushed conversations filled the air as people tried to make sense of what they had just heard. Samuel sat alone at a table, staring blankly at his food. He couldn't eat. Not now.
He felt a hand on his shoulder and looked up to see Maria, the doctor. Her eyes were red-rimmed, but her voice was steady. "We need to talk," she said, guiding him towards the infirmary.
Once inside, she closed the door behind them and leaned against it, her body trembling. "What are we going to do?" she asked, her voice cracking.
Samuel shook his head. "I don't know."
"We can't just let six of us die," she said, her voice rising in panic. "There has to be another way."
He looked at her, at the desperation etched on her face, and felt a flicker of anger. "What do you suggest? We all hold hands and sing Kumbaya while we starve to death?"
Maria's eyes flashed with anger, but she quickly composed herself. "No," she said softly. "But we can't just give up."
Samuel sighed and ran a hand through his greasy hair. "I'm not giving up," he said. "I'm just being realistic."
She looked at him for a long moment before nodding. "You're right," she said, her voice barely audible. "We need to be realistic."
They sat in silence for a while, each lost in their own thoughts. Then, Maria spoke up again. "There is one option," she said hesitantly. "But it's... drastic."
Samuel raised an eyebrow. "What?"
"We could... we could draw straws," she said, her voice barely above a whisper. "Choose six people at random to leave the bunker and take their chances on the surface."
He stared at her in disbelief. "You can't be serious."
"It's not ideal," she admitted. "But it's better than just letting them die down here."
Samuel shook his head, his mind reeling. He couldn't fathom the thought of sending people out into that wasteland, knowing they would likely never return. But what other choice did they have?
As he sat there, the weight of their decision pressing down on him like a leaden shroud, he heard a commotion outside. Shouting, followed by the sound of scuffling feet and something heavy being dragged across the floor. He exchanged a worried glance with Maria before rushing out into the corridor.
What he saw made his blood run cold. A group of men had cornered one of the younger women, their faces twisted with rage and desperation. They were arguing over who would get to leave the bunker, who would get a chance at life. Fists flew, and soon, the sound of gunfire echoed through the halls.
Samuel stood there, frozen in place as chaos erupted around him. This was it. The end of everything they had worked so hard to preserve. He could feel the darkness closing in, suffocating him, as he realized that their last hope for survival had just been shattered like glass under a bootheel.
And all he could do was watch as the world above claimed them, one by one.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1848 Gold Rush, before moving to London in the mid 1970s. There should be a mysterious 'occult' item that connects these two time periods.
### ASSISTANT
Chapter One: The Golden Veil
The year was 1848, and California was a land of dreams and desperation. Men from all corners of the world had flocked to its golden shores, lured by the promise of wealth beyond their wildest imaginations. They came with pickaxes and pans, with hope in their hearts and greed in their eyes, ready to tear apart the earth itself in search of fortune.
In the midst of this frenzy stood a solitary figure, a man who seemed out of place amidst the chaos. He was tall and lean, with a face that bore the lines of a thousand stories. His clothes were simple but well-made, his boots worn from years of travel. He moved with an air of quiet authority, as if he belonged to another time entirely.
His name was Alistair Blackwood, and he had come to California not for gold, but for something far more valuable: knowledge. A scholar by trade and an adventurer at heart, Blackwood had spent his life chasing after ancient texts and forgotten lore, seeking to unravel the secrets of the universe. And now, he believed he was on the verge of a discovery that would change everything.
Rumors had reached him of a strange artifact, said to have been found deep within the Sierra Nevada mountains. It was a small, intricately carved box made of an unknown metal, its surface etched with symbols that defied translation. Those who claimed to have seen it spoke in hushed tones of its otherworldly beauty and the eerie sense of power that seemed to emanate from within.
Blackwood had tracked the rumors to a dusty saloon in Sacramento, where he hoped to find answers among the grizzled miners who frequented the place. As he pushed open the swinging doors, the din of voices and clinking glasses washed over him like a wave. He scanned the room, his gaze settling on a group of men huddled around a table in the corner.
One look at their faces told him they were the ones he sought: sun-weathered and unshaven, their eyes glinting with a mixture of excitement and fear as they whispered amongst themselves. He approached them slowly, aware that he was an outsider in this rough-and-tumble world.
"Gentlemen," he said, his voice low and cultured, "I couldn't help but overhear your conversation. Might I be so bold as to ask what has you all so...animated?"
The men exchanged wary glances before one of them spoke up. "You ain't from around here, are ya?" he drawled, eyeing Blackwood's fine clothes with suspicion.
"No," Blackwood admitted, "I am not. But I have traveled far and wide in search of knowledge, and I believe you may possess something that could be of great interest to me."
He reached into his pocket and produced a small leather pouch, which he placed on the table with a soft thud. The men's eyes widened as they saw the glint of gold within.
"I understand there is an object - a box, perhaps? - that has recently come into your possession. I would be willing to pay handsomely for any information you might have about it."
The miners looked at each other, then back at Blackwood. Finally, the one who had spoken before nodded slowly. "We might know somethin' 'bout that," he said, his voice dropping to a conspiratorial whisper. "But it ain't no ordinary box."
As they began to tell their tale, the saloon faded away around them, replaced by images of dark caverns and glittering treasure. Blackwood leaned forward, his heart pounding with anticipation. This was it - the moment he had been waiting for.
The box, they said, had been found buried beneath a waterfall, hidden in a cave that seemed to defy all natural laws. It was guarded by strange creatures unlike anything any of them had ever seen: half-man, half-beast, with eyes that glowed like embers in the darkness.
But the greatest mystery of all lay within the box itself. For when they opened it, they found not gold or jewels, but a single sheet of parchment, yellowed with age and covered in writing that none of them could decipher.
Blackwood's pulse quickened as he listened. This was more than just another piece of occult nonsense; this was the real thing. He knew it in his bones.
"I must see this box for myself," he said, his voice barely above a whisper. "Take me to it."
The miners exchanged uneasy glances before nodding their assent. They led him out of the saloon and into the night, down winding streets and through shadowy alleyways until they reached a nondescript building on the edge of town. Inside, they descended a rickety staircase to a dimly lit basement filled with crates and barrels.
There, in the center of the room, sat the object of his obsession: a small, unassuming box that seemed to hum with energy. Blackwood reached out a trembling hand to touch it, feeling the power coursing through him like electricity.
As he did so, the world around him shifted, and suddenly he was no longer in a dingy basement in Sacramento. He stood instead in a grand library, its shelves groaning under the weight of countless tomes and scrolls. A fire crackled in the hearth, casting flickering shadows on the walls.
A figure emerged from the gloom, cloaked in darkness save for two piercing eyes that seemed to bore into his very soul. "You have found it," the voice said, deep and resonant. "The key to unlocking the secrets of the universe."
Blackwood stared at the apparition, unable to speak. He knew he should be afraid, but all he felt was awe.
"But be warned," the figure continued, "for with great power comes great responsibility. The knowledge contained within this box is not meant for mortal minds."
And then, just as suddenly as it had appeared, the vision vanished, leaving Blackwood alone once more in the dimly lit room. He stared at the box, his mind racing. What did it all mean?
As he pondered this question, a sudden commotion erupted upstairs. Shouts and gunfire filled the air, followed by the sound of heavy footsteps on the staircase. The miners had betrayed him!
He snatched up the box and fled into the night, his heart pounding in his chest. He knew he couldn't stay in California - not now that others were after the artifact. No, he would have to take it far away, to a place where its secrets could be studied in safety.
And so began a journey that would span continents and decades, from the wild frontier of the Gold Rush to the foggy streets of Victorian London. For Alistair Blackwood had stumbled upon something much greater than mere treasure; he held in his hands the key to unlocking the very fabric of reality itself.
But little did he know that the true mystery was only just beginning...
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty historical novel set during the Battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. Partway through the story, he should have a flashback to the moment it was overrun. The story should not end well for either of them.
### ASSISTANT
Chapter 1
The cold bit into my bones as I trudged through the snow, each step a labored effort. My breath came out in ragged puffs, clouding the air around me. In my arms, I cradled the weight of my world - my commander, Hauptmann Müller, his body limp and heavy with pain. His eyes were closed, but I could hear the shallow rasp of each breath he took, a testament to his will to live.
The forest around us was a maze of shadows and death, the trees standing like skeletal sentinels against the moonlit sky. The silence was deafening, broken only by the distant rumble of artillery and the occasional crack of a rifle shot. It was a cruel irony that the same snow that muffled our footsteps also betrayed the presence of the enemy, their tracks glowing like phosphorescent snakes in the pale light.
I stumbled, my legs threatening to give way under the strain. Hauptmann Müller groaned, his hand clutching at my coat. "Keep moving, Leutnant," he whispered, his voice barely audible. "We must reach the defensive line."
I nodded, gritting my teeth as I adjusted my grip on him. The pain in my arms and shoulders was a constant companion now, but it was nothing compared to the fear that gnawed at my gut. We were alone, cut off from our unit during the chaos of the American assault. The bunker we had been holed up in was now a tomb for the men who had trusted us to lead them.
A flash of memory seared through my mind, taking me back to that fateful moment when it all went wrong. The thunderous roar of explosions, the earth shaking beneath our feet as if the very foundations of hell were breaking loose. The screams of men, the acrid smell of smoke and cordite, the taste of blood in my mouth.
I remembered the look on Müller's face when he realized what was happening - a mixture of disbelief and determination. He had barked orders, trying to rally our troops, but it was too late. The Americans were upon us, their numbers overwhelming. I saw him take a bullet to the leg, his face contorting in agony as he fell. And then there was nothing but chaos and darkness.
When I came to, the bunker was silent except for the moans of the dying. The air was thick with smoke, making it hard to breathe. I found Müller slumped against a wall, his uniform soaked in blood. He looked up at me, his eyes glassy with pain but still burning with that same fierce determination. "We have to go," he said, gripping my arm. "They're coming."
And so we had fled into the night, leaving behind the wreckage of our command post and the bodies of our comrades. Now, hours later, we were still stumbling through the forest, searching for safety that seemed to recede with every step.
I could feel my strength ebbing away, my legs turning to lead. But I couldn't stop, not now. Not when we were so close. The deep defensive line was our only hope, a thin red line that stood between us and annihilation. If we could just make it there...
But fate is a cruel mistress, and she had other plans for us. As we staggered out of the trees and into a small clearing, I saw them - a patrol of American soldiers, their silhouettes stark against the snow. They spotted us at the same time, their rifles swinging around to point in our direction.
"Halt!" one of them yelled, his voice harsh and commanding. "Hands up!"
I froze, my heart pounding in my chest. Müller's grip on me tightened, his breath coming in short gasps. I knew what he was thinking - we couldn't surrender, not like this. We were too valuable, too close to the front lines. They would shoot us or take us prisoner, and either way, it would be a death sentence for him.
"Run," he whispered, his voice barely audible. "Leave me."
I shook my head, tears stinging my eyes. I couldn't do that. I wouldn't. He was my commander, my friend. I owed him everything.
But before I could make a move, the air was filled with the sound of gunfire. Müller jerked in my arms, a wet gurgle escaping his lips as he slumped against me. I stared down at him in disbelief, feeling the warmth of his blood soaking through my coat.
"Nein," I whispered, my voice choked with grief and rage. "Nein!"
I fell to my knees, cradling his lifeless body as the world around me spun out of control. The Americans were closing in, their shouts growing louder. I knew it was over - for both of us.
With a final, desperate cry, I raised my pistol and fired into the darkness, hoping to take at least one of them with me. But my aim was wild, my strength gone. The last thing I saw was a flash of light, followed by an explosion of pain in my chest. And then there was nothing but blackness.
As I slipped away, I thought of home - of my mother's warm embrace, of the laughter of my little sister. And I knew that I would never see them again. This frozen hellhole would be my grave, and Hauptmann Müller's sacrifice would be forgotten by all but me.
But in that moment, as death claimed me, I found a strange sort of peace. We had fought for our Fatherland until the very end, and we had done so with honor. And in the grand tapestry of history, perhaps that was all that mattered.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged worldview. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be *very* dark, show Anton's still-developing obsession with "chance", and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform.
### ASSISTANT
In the dimly lit corner of a nondescript Mexican cantina, Anton Chigurh sat motionless, his eyes scanning the room like a predator assessing its prey. The air was thick with the smell of stale cigarette smoke and cheap tequila, but he remained unfazed, his senses heightened, attuned to every sound, every movement around him. He nursed a glass of water, the condensation beading on its surface like droplets of sweat on cold steel.
His mind wandered, as it often did, to the concept of chance. The randomness of life and death fascinated him, the idea that one's fate could hinge on something as simple as a coin flip or the roll of dice. He believed in the purity of such decisions, untainted by human emotion or prejudice. It was a philosophy he had embraced wholeheartedly, and it had served him well in his line of work.
A fly buzzed around his head, its tiny wings beating an erratic rhythm against the stillness of the room. He watched it with mild interest, wondering if its life would end here, on this very spot, or if it would continue its meaningless existence elsewhere. The thought brought a small, cruel smile to his lips.
The bartender, a grizzled old man with a face like leather, eyed him warily from across the room. He knew better than to engage in conversation with this stranger, who had entered his establishment an hour ago and hadn't spoken a word since. There was something about him that made the hairs on the back of his neck stand up, something cold and deadly lurking just beneath the surface.
Chigurh's thoughts were interrupted by the sound of the door creaking open. A man in a cheap suit entered, looking around nervously before making his way towards him. He slid into the seat across from Chigurh, who continued to stare straight ahead, as if he hadn't noticed the new arrival.
"You're late," Chigurh said finally, his voice low and gravelly.
The man in the suit swallowed hard, adjusting his tie. "I apologize. Traffic."
Chigurh didn't respond, his eyes still fixed on some distant point. The man fidgeted, reaching into his pocket to retrieve a small envelope. He slid it across the table, and Chigurh picked it up without looking at him, tearing it open with a single, precise movement.
Inside was a photograph of a man, along with an address scrawled on a piece of paper. He studied the image for a moment, committing every detail to memory before slipping it back into the envelope and sliding it into his own pocket.
"You know what needs to be done," the man in the suit said, his voice barely above a whisper.
Chigurh nodded once, his mind already turning to the task at hand. He stood up, leaving a few crumpled bills on the table for the water he hadn't touched. As he walked towards the door, he could feel the weight of the grenade in his other pocket, its cool metal surface pressing against his thigh like a lover's caress.
Outside, the heat of the day was oppressive, but Chigurh barely registered it as he climbed into his car and started the engine. He drove through the dusty streets, his thoughts consumed by the upcoming job. The man in the photograph had a wife, two kids. It didn't matter. They were all just pieces on a board, pawns to be moved or removed as necessary.
He arrived at the address just as the sun was beginning to set, casting long shadows across the dilapidated building. He sat in his car for several minutes, watching the comings and goings of the neighborhood, waiting for the right moment. When it came, he slipped out of the vehicle and approached the door with a sense of purpose.
Inside, he could hear laughter and music, the sounds of a family enjoying their evening together. He smiled to himself as he pulled the grenade from his pocket, feeling its weight in his hand. This was what he lived for: the moment when chance decided who would live and who would die.
He took a deep breath, savoring the anticipation, before pulling the pin with a satisfying click. He tossed it into the room, not bothering to wait for the explosion as he turned and walked away. The screams that followed were like music to his ears, a symphony of chaos and destruction.
As he drove off into the night, Chigurh couldn't help but feel a sense of satisfaction. Another job well done, another life ended by the whims of fate. He reached into his pocket and pulled out a coin, flipping it idly between his fingers as he considered his next move.
Heads or tails? Life or death? It was all the same to him. The world was a cruel place, and he was just its instrument, meting out justice in the only way he knew how: with cold, calculated precision. And if some people happened to get caught in the crossfire... well, that was just the luck of the draw.
</details>
Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)!
|
mizworski/food_classifier
|
mizworski
| 2024-05-11T12:53:04Z | 64 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-05-11T07:58:32Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: mizworski/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mizworski/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8832
- Validation Loss: 0.8080
- Train Accuracy: 0.965
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
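The card does not include a usage example; below is a minimal inference sketch (not from the original card), assuming the repo's TensorFlow weights load through the standard `transformers` image-classification pipeline and that `example_dish.jpg` is a placeholder path:

```python
from transformers import pipeline

# Minimal sketch: framework="tf" loads the TensorFlow weights this repo ships.
classifier = pipeline(
    "image-classification",
    model="mizworski/food_classifier",
    framework="tf",
)

# Accepts a local path, URL, or PIL image; "example_dish.jpg" is a placeholder.
print(classifier("example_dish.jpg"))
```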
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
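For reference, this configuration matches what `transformers.create_optimizer` produces; the sketch below shows how the optimizer could be reconstructed. The step count, learning rate, and weight decay are read straight from the config above; everything else is the function's defaults:

```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (PolynomialDecay, power=1.0) schedule
# from 3e-05 down to 0.0 over 4000 steps, weight decay rate 0.01.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-5,
    num_train_steps=4000,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```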
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 3.3274 | 2.4030 | 0.715 | 0 |
| 1.9821 | 1.6755 | 0.96 | 1 |
| 1.4823 | 1.2900 | 0.96 | 2 |
| 1.1250 | 1.0242 | 0.965 | 3 |
| 0.8832 | 0.8080 | 0.965 | 4 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
cobrakenji/granite-7b-base-Q4_K_M-GGUF
|
cobrakenji
| 2024-05-11T12:53:00Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T12:52:47Z |
---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# cobrakenji/granite-7b-base-Q4_K_M-GGUF
This model was converted to GGUF format from [`ibm/granite-7b-base`](https://huggingface.co/ibm/granite-7b-base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm/granite-7b-base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo cobrakenji/granite-7b-base-Q4_K_M-GGUF --model granite-7b-base.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo cobrakenji/granite-7b-base-Q4_K_M-GGUF --model granite-7b-base.Q4_K_M.gguf -c 2048
```
Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m granite-7b-base.Q4_K_M.gguf -n 128
```
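The same GGUF file can also be used from Python via the `llama-cpp-python` bindings; a minimal sketch, assuming `pip install llama-cpp-python` and that the quantized file has been downloaded locally:

```python
from llama_cpp import Llama

# Load the quantized checkpoint; n_ctx sets the context window.
llm = Llama(model_path="granite-7b-base.Q4_K_M.gguf", n_ctx=2048)

output = llm("The meaning to life and the universe is", max_tokens=128)
print(output["choices"][0]["text"])
```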
|
WbjuSrceu/model-LORA-2b
|
WbjuSrceu
| 2024-05-11T12:51:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T12:50:24Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** WbjuSrceu
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
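The card ships no usage example; a minimal loading sketch follows, assuming this repo contains standard PEFT LoRA adapter files on top of the stated base model (if it holds merged weights instead, load it directly with `AutoModelForCausalLM`):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-2b-bnb-4bit", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2b-bnb-4bit")
model = PeftModel.from_pretrained(base, "WbjuSrceu/model-LORA-2b")
```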
|
SidXXD/attn_maps-dog-mist-mask-1-background
|
SidXXD
| 2024-05-11T12:51:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-05-11T12:46:30Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/attn_maps-dog-mist-mask-1-background
These are Custom Diffusion adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on `photo of a <new1> dog` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
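A minimal inference sketch, assuming the standard diffusers Custom Diffusion loading flow; the weight file names below follow the library's defaults and are assumptions here:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the Custom Diffusion attention processors and the learned <new1> token embedding.
pipe.unet.load_attn_procs(
    "SidXXD/attn_maps-dog-mist-mask-1-background",
    weight_name="pytorch_custom_diffusion_weights.bin",
)
pipe.load_textual_inversion(
    "SidXXD/attn_maps-dog-mist-mask-1-background", weight_name="<new1>.bin"
)

image = pipe("photo of a <new1> dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("new1_dog.png")
```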
|
WbjuSrceu/gemma-2b-model
|
WbjuSrceu
| 2024-05-11T12:50:18Z | 136 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-11T12:29:00Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** WbjuSrceu
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ronias/videomae-base-finetuned-ucf101-subset
|
ronias
| 2024-05-11T12:49:52Z | 64 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-05-08T21:44:48Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5973
- Accuracy: 0.8286
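A minimal usage sketch, assuming the checkpoint works with the `transformers` video-classification pipeline (the video path is a placeholder, and a video backend such as `decord` or `av` must be installed):
```python
from transformers import pipeline

# Classify a short video clip with the fine-tuned VideoMAE checkpoint.
classifier = pipeline("video-classification", model="ronias/videomae-base-finetuned-ucf101-subset")
predictions = classifier("path/to/clip.mp4")  # placeholder path
print(predictions)
```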
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8559 | 0.25 | 300 | 0.9768 | 0.6286 |
| 1.3879 | 1.25 | 600 | 1.3276 | 0.5857 |
| 0.3694 | 2.25 | 900 | 1.2342 | 0.7 |
| 1.0305 | 3.25 | 1200 | 0.5973 | 0.8286 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Tokenizers 0.19.1
|
team-sanai/llama2_7B_pretrain_2nd
|
team-sanai
| 2024-05-11T12:45:00Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-11T05:32:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SidXXD/attn_maps-dog-mist-mask-1-foreground
|
SidXXD
| 2024-05-11T12:39:31Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-05-11T12:34:58Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/attn_maps-dog-mist-mask-1-foreground
These are Custom Diffusion adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on `photo of a <new1> dog` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
Alusi/my_awesome_qa_model
|
Alusi
| 2024-05-11T12:38:24Z | 69 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-05-11T06:09:34Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Alusi/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Alusi/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7434
- Validation Loss: 1.7502
- Epoch: 1
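A minimal usage sketch, assuming the checkpoint works with the `transformers` question-answering pipeline; `framework="tf"` matches the TF weights this repo ships, and the question and context are illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Alusi/my_awesome_qa_model", framework="tf")
result = qa(
    question="What is DistilBERT distilled from?",
    context="DistilBERT is a smaller, faster transformer distilled from BERT.",
)
print(result["answer"], result["score"])
```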
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.3128 | 2.0513 | 0 |
| 1.7434 | 1.7502 | 1 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mrbesher/custom-ppo-LunarLander-v2
|
mrbesher
| 2024-05-11T12:34:01Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-11T12:33:56Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -185.44 +/- 150.30
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'mrbesher/custom-ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
SidXXD/attn_maps-dog-mist-ll
|
SidXXD
| 2024-05-11T12:29:13Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-05-11T12:24:41Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/attn_maps-dog-mist-ll
These are Custom Diffusion adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on `photo of a <new1> dog` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
Azamorn/Meta-Llama-3-8B-Instruct-Distributed
|
Azamorn
| 2024-05-11T12:29:08Z | 0 | 0 | null |
[
"license:llama3",
"region:us"
] | null | 2024-05-11T12:12:34Z |
---
license: llama3
---
Llama 3 8B Instruct quantized to the Q40 format supported by [Distributed Llama](https://github.com/b4rtaz/distributed-llama).
## License
Before downloading this repository, please accept the [Llama 3 Community License](https://llama.meta.com/llama3/license/).
## How to run
1. Clone this repository.
2. Clone Distributed Llama:
```sh
git clone https://github.com/b4rtaz/distributed-llama.git
```
3. Build Distributed Llama:
```sh
cd distributed-llama
make main
```
4. Run Distributed Llama:
```sh
./main inference --prompt "Hello world" --steps 128 --weights-float-type q40 --buffer-float-type q80 --nthreads 4 --model path/to/dllama_meta-llama-3-8b_q40.bin --tokenizer path/to/dllama_meta-llama3-tokenizer.t
```
### Chat Template
Keep in mind that this model expects prompts to use the Llama 3 chat template.
|
nlmaldonadog/clasificador-rotten-tomatoes-bert-base-uncased
|
nlmaldonadog
| 2024-05-11T12:28:54Z | 106 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-11T12:27:25Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-rotten-tomatoes-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-rotten-tomatoes-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7643
- Accuracy: 0.8659
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3987 | 1.0 | 1067 | 0.3859 | 0.8471 |
| 0.2365 | 2.0 | 2134 | 0.6696 | 0.8612 |
| 0.0844 | 3.0 | 3201 | 0.7643 | 0.8659 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Ikhee10/my_awesome_gpt-inflection-model
|
Ikhee10
| 2024-05-11T12:26:37Z | 206 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:bayartsogt/mongolian-gpt2",
"base_model:finetune:bayartsogt/mongolian-gpt2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-11T11:54:32Z |
---
base_model: bayartsogt/mongolian-gpt2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_gpt-inflection-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_gpt-inflection-model
This model is a fine-tuned version of [bayartsogt/mongolian-gpt2](https://huggingface.co/bayartsogt/mongolian-gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 9.4122
- Accuracy: 0.0010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 323 | 7.8824 | 0.0020 |
| 0.931 | 2.0 | 646 | 9.0168 | 0.0010 |
| 0.931 | 3.0 | 969 | 9.4122 | 0.0010 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AlkQ/dqn-SpaceInvadersNoFrameskip-v4
|
AlkQ
| 2024-05-11T12:25:10Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-11T12:24:38Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 551.50 +/- 149.83
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AlkQ -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AlkQ -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AlkQ
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf
|
RichardErkhov
| 2024-05-11T12:24:26Z | 4 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-11T10:36:38Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-2-7b-chat-nf4-fp16-upscaled - GGUF
- Model creator: https://huggingface.co/arnavgrg/
- Original model: https://huggingface.co/arnavgrg/llama-2-7b-chat-nf4-fp16-upscaled/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q2_K.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q2_K.gguf) | Q2_K | 2.36GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.IQ3_S.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.IQ3_M.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q3_K.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q3_K.gguf) | Q3_K | 3.07GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q4_0.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q4_0.gguf) | Q4_0 | 3.56GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q4_K.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q4_K.gguf) | Q4_K | 3.8GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q4_1.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q4_1.gguf) | Q4_1 | 3.95GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q5_0.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q5_0.gguf) | Q5_0 | 4.33GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q5_K.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q5_K.gguf) | Q5_K | 4.45GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q5_1.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q5_1.gguf) | Q5_1 | 4.72GB |
| [llama-2-7b-chat-nf4-fp16-upscaled.Q6_K.gguf](https://huggingface.co/RichardErkhov/arnavgrg_-_llama-2-7b-chat-nf4-fp16-upscaled-gguf/blob/main/llama-2-7b-chat-nf4-fp16-upscaled.Q6_K.gguf) | Q6_K | 5.15GB |
Original model description:
---
license: apache-2.0
tags:
- text-generation-inference
---
This is an upscaled fp16 variant of the original Llama-2-7b-chat base model by Meta after it has been loaded with nf4 4-bit quantization via bitsandbytes.
The main idea here is to upscale the linear4bit layers to fp16 so that the quantization/dequantization cost doesn't have to be paid for each forward pass at inference time.
_Note: The quantization operation to nf4 is not lossless, so the model weights for the linear layers are lossy, which means that this model will not work as well as the official base model._
To use this model, you can just load it via `transformers` in fp16:
```python
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"arnavgrg/llama-2-7b-chat-nf4-fp16-upscaled",
device_map="auto",
torch_dtype=torch.float16
)
```
|
SidXXD/attn_maps-dog-clean
|
SidXXD
| 2024-05-11T12:20:01Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-05-11T12:15:25Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: photo of a <new1> dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/attn_maps-dog-clean
These are Custom Diffusion adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on `photo of a <new1> dog` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
HassanCS/mlm_prot_bert_10_epochs_all_train
|
HassanCS
| 2024-05-11T11:55:51Z | 126 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-11T11:54:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
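A minimal fill-mask sketch, assuming the checkpoint works with the `transformers` fill-mask pipeline; ProtBERT-style models typically expect space-separated amino-acid sequences, and the input below is illustrative:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="HassanCS/mlm_prot_bert_10_epochs_all_train")
# ProtBERT-style inputs are space-separated amino acids; [MASK] marks the residue to predict.
print(unmasker("D L I P T S S K L V V [MASK] D T S L Q V K"))
```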
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
akxier/audios_iabd
|
akxier
| 2024-05-11T11:54:34Z | 0 | 0 | null |
[
"license:cc-by-nc-nd-4.0",
"region:us"
] | null | 2024-05-11T11:53:19Z |
---
license: cc-by-nc-nd-4.0
---
|
shkna1368/t5-base-finetuned-poemV2
|
shkna1368
| 2024-05-11T11:53:38Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-11T11:09:07Z |
---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-poemV2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-poemV2
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1041
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 282 | 0.1235 |
| 0.1391 | 2.0 | 564 | 0.1177 |
| 0.1391 | 3.0 | 846 | 0.1142 |
| 0.1209 | 4.0 | 1128 | 0.1133 |
| 0.1209 | 5.0 | 1410 | 0.1106 |
| 0.1141 | 6.0 | 1692 | 0.1137 |
| 0.1141 | 7.0 | 1974 | 0.1099 |
| 0.1113 | 8.0 | 2256 | 0.1059 |
| 0.1088 | 9.0 | 2538 | 0.1064 |
| 0.1088 | 10.0 | 2820 | 0.1070 |
| 0.1087 | 11.0 | 3102 | 0.1059 |
| 0.1087 | 12.0 | 3384 | 0.1054 |
| 0.1059 | 13.0 | 3666 | 0.1053 |
| 0.1059 | 14.0 | 3948 | 0.1050 |
| 0.1052 | 15.0 | 4230 | 0.1057 |
| 0.1049 | 16.0 | 4512 | 0.1044 |
| 0.1049 | 17.0 | 4794 | 0.1049 |
| 0.1035 | 18.0 | 5076 | 0.1044 |
| 0.1035 | 19.0 | 5358 | 0.1039 |
| 0.1034 | 20.0 | 5640 | 0.1041 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
HassanCS/mlm_prot_bert_5_epochs_all_train
|
HassanCS
| 2024-05-11T11:53:00Z | 127 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-05-11T11:52:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Litzy619/Phi0503HMA23
|
Litzy619
| 2024-05-11T11:50:57Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"base_model:finetune:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | 2024-05-11T07:51:58Z |
---
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- generated_from_trainer
model-index:
- name: Phi0503HMA23
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Phi0503HMA23
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 80
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2718 | 0.09 | 10 | 0.6833 |
| 0.3393 | 0.18 | 20 | 0.2266 |
| 0.2573 | 0.27 | 30 | 0.2267 |
| 0.2236 | 0.36 | 40 | 0.2029 |
| 0.2141 | 0.45 | 50 | 0.2326 |
| 0.2251 | 0.54 | 60 | 0.2256 |
| 0.1965 | 0.63 | 70 | 0.1851 |
| 0.196 | 0.73 | 80 | 0.1693 |
| 0.1665 | 0.82 | 90 | 0.1641 |
| 0.1427 | 0.91 | 100 | 0.1232 |
| 0.1133 | 1.0 | 110 | 0.0969 |
| 0.0833 | 1.09 | 120 | 0.0825 |
| 0.0777 | 1.18 | 130 | 0.1040 |
| 0.4 | 1.27 | 140 | 0.0785 |
| 0.0787 | 1.36 | 150 | 0.0768 |
| 0.076 | 1.45 | 160 | 0.0766 |
| 0.0712 | 1.54 | 170 | 0.0717 |
| 0.0668 | 1.63 | 180 | 0.0696 |
| 0.0668 | 1.72 | 190 | 0.0650 |
| 0.0712 | 1.81 | 200 | 0.0673 |
| 0.0649 | 1.9 | 210 | 0.0688 |
| 0.0624 | 1.99 | 220 | 0.0643 |
| 0.0338 | 2.08 | 230 | 0.0756 |
| 0.0329 | 2.18 | 240 | 0.0983 |
| 0.0312 | 2.27 | 250 | 0.0859 |
| 0.031 | 2.36 | 260 | 0.0770 |
| 0.0371 | 2.45 | 270 | 0.0734 |
| 0.0303 | 2.54 | 280 | 0.0735 |
| 0.0292 | 2.63 | 290 | 0.0740 |
| 0.0352 | 2.72 | 300 | 0.0732 |
| 0.0382 | 2.81 | 310 | 0.0725 |
| 0.033 | 2.9 | 320 | 0.0719 |
| 0.0313 | 2.99 | 330 | 0.0717 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.0
|
CodeTriad/mistral-instruct-finetune_1500_r_128_alpha_64
|
CodeTriad
| 2024-05-11T11:49:14Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-05-11T11:48:45Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
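A minimal loading sketch, assuming this repo contains a PEFT (LoRA) adapter for the listed base model; the prompt and generation settings are illustrative:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base, "CodeTriad/mistral-instruct-finetune_1500_r_128_alpha_64")

inputs = tokenizer("[INST] Summarize LoRA in one sentence. [/INST]", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```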
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf
|
RichardErkhov
| 2024-05-11T11:48:05Z | 9 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T08:23:46Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OPEN-SOLAR-KO-10.7B - GGUF
- Model creator: https://huggingface.co/beomi/
- Original model: https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OPEN-SOLAR-KO-10.7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q2_K.gguf) | Q2_K | 3.79GB |
| [OPEN-SOLAR-KO-10.7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.IQ3_XS.gguf) | IQ3_XS | 4.21GB |
| [OPEN-SOLAR-KO-10.7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.IQ3_S.gguf) | IQ3_S | 4.44GB |
| [OPEN-SOLAR-KO-10.7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q3_K_S.gguf) | Q3_K_S | 4.41GB |
| [OPEN-SOLAR-KO-10.7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.IQ3_M.gguf) | IQ3_M | 4.58GB |
| [OPEN-SOLAR-KO-10.7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q3_K.gguf) | Q3_K | 4.91GB |
| [OPEN-SOLAR-KO-10.7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q3_K_M.gguf) | Q3_K_M | 4.91GB |
| [OPEN-SOLAR-KO-10.7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q3_K_L.gguf) | Q3_K_L | 5.33GB |
| [OPEN-SOLAR-KO-10.7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.IQ4_XS.gguf) | IQ4_XS | 5.5GB |
| [OPEN-SOLAR-KO-10.7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q4_0.gguf) | Q4_0 | 5.73GB |
| [OPEN-SOLAR-KO-10.7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.IQ4_NL.gguf) | IQ4_NL | 5.8GB |
| [OPEN-SOLAR-KO-10.7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q4_K_S.gguf) | Q4_K_S | 5.78GB |
| [OPEN-SOLAR-KO-10.7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q4_K.gguf) | Q4_K | 6.1GB |
| [OPEN-SOLAR-KO-10.7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q4_K_M.gguf) | Q4_K_M | 6.1GB |
| [OPEN-SOLAR-KO-10.7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q4_1.gguf) | Q4_1 | 6.35GB |
| [OPEN-SOLAR-KO-10.7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q5_0.gguf) | Q5_0 | 6.97GB |
| [OPEN-SOLAR-KO-10.7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q5_K_S.gguf) | Q5_K_S | 6.97GB |
| [OPEN-SOLAR-KO-10.7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q5_K.gguf) | Q5_K | 7.16GB |
| [OPEN-SOLAR-KO-10.7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q5_K_M.gguf) | Q5_K_M | 7.16GB |
| [OPEN-SOLAR-KO-10.7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q5_1.gguf) | Q5_1 | 7.59GB |
| [OPEN-SOLAR-KO-10.7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/beomi_-_OPEN-SOLAR-KO-10.7B-gguf/blob/main/OPEN-SOLAR-KO-10.7B.Q6_K.gguf) | Q6_K | 8.29GB |
Original model description:
---
language:
- ko
- en
pipeline_tag: text-generation
inference: false
tags:
- solar
- mistral
- pytorch
- solar-ko
library_name: transformers
license: apache-2.0
---
**Update Log**
- 2024.01.08: Initial Test version Release of Solar-Ko
# **Open-Solar-Ko** ⭐🇰🇷
Solar-Ko represents an advanced iteration of the upstage/SOLAR-10.7B-v1.0 model, featuring an expanded vocabulary and the inclusion of a Korean corpus for enhanced pretraining.
Open-Solar-Ko exclusively utilizes publicly accessible Korean corpora, including sources such as [AI Hub](https://www.aihub.or.kr), [Modu Corpus, 모두의 말뭉치](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/).
As training was conducted solely with publicly available corpora, this model is open for unrestricted use by everyone, adhering to the Apache 2.0 open-source license.
## Model Details
**Model Developers:** Junbum Lee (Beomi)
**Variations:** Solar-Ko is available in a single parameter size: a 10.7B continually pretrained version.
**Input:** The model accepts only text input.
**Output:** The model produces text output exclusively.
**Model Architecture:**
SOLAR-KO-10.7B is an auto-regressive language model that leverages an optimized transformer architecture derived from Llama-2.
| |Training Data|Parameters|Content Length|GQA|Tokens|Learning Rate|
|---|---|---|---|---|---|---|
|SOLAR-KO-10.7B|*A curated mix of Publicly Accessible Korean Corpora*|10.7B|4k|O|>15B*|5e<sup>-5</sup>|
**Training Corpus**
The model was trained using selected datasets from AIHub and Modu Corpus. Detailed information about the training datasets is available below:
- AI Hub: [corpus/AI_HUB](./corpus/AI_HUB)
- Only the `Training` segment of the data was used.
- The `Validation` and `Test` segments were deliberately excluded.
- Modu Corpus: [corpus/MODU_CORPUS](./corpus/MODU_CORPUS)
The final JSONL dataset used to train this model is approximately 61GB in size.
Total token count: approximately 15 billion tokens measured with the expanded tokenizer (equivalent to >60 billion tokens with the original SOLAR tokenizer).
**Vocab Expansion**
| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original Solar | 32000 | Sentencepiece BPE |
| **Expanded SOLAR-KO-10.7B** | 46592 | Sentencepiece BPE. Added Korean vocab and merges |
**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."**
- SOLAR-10.7B: 26 tokens
- SOLAR-KO-10.7B: 8 tokens
| Model | Tokens |
| --- | --- |
| SOLAR-10.7B | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '날', '<0xEC>', '<0x94>', '<0xA8>', '가', '▁', '좋', '네', '요', '.']` |
| SOLAR-KO-10.7B | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.']` |
**Tokenizing "Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!"**
- SOLAR-10.7B: 22 tokens
- SOLAR-KO-10.7B: 22 tokens
| Model | Tokens |
| --- | --- |
| SOLAR-10.7B | `['▁Meet', '▁', '1', '0', '.', '7', 'B', '▁Solar', ':', '▁E', 'lev', 'ating', '▁Performance', '▁with', '▁Up', 'stage', '▁Dep', 'th', '▁UP', '▁Scal', 'ing', '!']` |
| SOLAR-KO-10.7B | `['▁Meet', '▁', '1', '0', '.', '7', 'B', '▁Solar', ':', '▁E', 'lev', 'ating', '▁Performance', '▁with', '▁Up', 'stage', '▁Dep', 'th', '▁UP', '▁Scal', 'ing', '!']` |
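For reference, the comparison above can be reproduced with a short script. This is a minimal sketch, assuming the repo ids from the links in this card and that both tokenizers load via `AutoTokenizer`:

```python
from transformers import AutoTokenizer

# Repo ids assumed from this card; both are Sentencepiece BPE tokenizers.
base = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-v1.0")
ko = AutoTokenizer.from_pretrained("beomi/SOLAR-KO-10.7B")

text = "안녕하세요, 오늘은 날씨가 좋네요."
print(len(base.tokenize(text)))  # 26 tokens with the original vocab
print(len(ko.tokenize(text)))    # 8 tokens with the expanded vocab
```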
# LICENSE
Apache 2.0
# **Model Benchmark**
## LM Eval Harness - Korean (polyglot branch)
- Evaluated using EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) (polyglot branch)
| Task (metric) | 0-shot | 5-shot | 10-shot | 50-shot |
|:---------------------------------|---------:|---------:|---------:|---------:|
| kobest_boolq (macro_f1) | 0.853949 | 0.88098 | 0.898139 | 0.902354 |
| kobest_copa (macro_f1) | 0.804531 | 0.826736 | 0.837656 | 0.860899 |
| kobest_hellaswag (macro_f1) | 0.507174 | 0.500983 | 0.487287 | 0.512182 |
| kobest_sentineg (macro_f1) | 0.3517 | 0.972291 | 0.977321 | 0.984884 |
| kohatespeech (macro_f1) | 0.258111 | 0.403957 | 0.386808 | 0.462393 |
| kohatespeech_apeach (macro_f1) | 0.337667 | 0.651697 | 0.705337 | 0.827757 |
| kohatespeech_gen_bias (macro_f1) | 0.124535 | 0.503464 | 0.498501 | 0.443218 |
| korunsmile (f1) | 0.3814 | 0.356939 | 0.369989 | 0.296193 |
| nsmc (acc) | 0.5356 | 0.87162 | 0.88654 | 0.89632 |
| pawsx_ko (acc) | 0.5435 | 0.5245 | 0.5315 | 0.5385 |
## Citation
```
@misc {solar_ko_junbum_2023,
author = { {L. Junbum} },
title = { Solar-Ko-10.7b },
year = 2024,
url = { https://huggingface.co/beomi/SOLAR-KO-10.7B },
publisher = { Hugging Face }
}
```
## Acknowledgements
- Training support was provided by the [TPU Research Cloud](https://sites.research.google/trc/) program.
- The training corpus includes data from [AI Hub](https://www.aihub.or.kr/), [Modu Corpus](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/).
|
classicakeza5/Legal_act101
|
classicakeza5
| 2024-05-11T11:33:34Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"luke",
"text-classification",
"generated_from_trainer",
"base_model:studio-ousia/mluke-base",
"base_model:finetune:studio-ousia/mluke-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-11T10:04:11Z |
---
license: apache-2.0
base_model: studio-ousia/mluke-base
tags:
- generated_from_trainer
model-index:
- name: Legal_act101
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Legal_act101
This model is a fine-tuned version of [studio-ousia/mluke-base](https://huggingface.co/studio-ousia/mluke-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
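For reference, a hypothetical `TrainingArguments` sketch reproducing the settings above (the output directory and evaluation strategy are assumptions; model and dataset loading are omitted):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Legal_act101",    # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumed, matching the per-epoch results table below
)
```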
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4319 | 1.0 | 443 | 0.2880 |
| 0.1956 | 2.0 | 886 | 0.2239 |
| 0.4354 | 3.0 | 1329 | 0.2060 |
| 0.5987 | 4.0 | 1772 | 0.1693 |
| 0.1867 | 5.0 | 2215 | 0.1674 |
| 0.0974 | 6.0 | 2658 | 0.1748 |
| 0.263 | 7.0 | 3101 | 0.1616 |
| 0.1949 | 8.0 | 3544 | 0.1512 |
| 0.0053 | 9.0 | 3987 | 0.1841 |
| 0.1426 | 10.0 | 4430 | 0.1734 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
digiplay/bluePencil_v09b
|
digiplay
| 2024-05-11T11:20:53Z | 406 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-20T09:49:45Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/79083?modelVersionId=89814
Original Author's DEMO image:

|
toure32/dqn-SpaceInvadersNoFrameskip-v4
|
toure32
| 2024-05-11T11:17:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-11T11:17:02Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 605.00 +/- 105.74
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga toure32 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga toure32 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga toure32
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
aliatik66/ISY503-sentiment_analysis2
|
aliatik66
| 2024-05-11T11:15:21Z | 110 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-11T11:09:09Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ISY503-sentiment_analysis2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ISY503-sentiment_analysis2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3514
- Accuracy: 0.8533
- F1: 0.8543
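As a rough usage sketch (repo id taken from this page; the label mapping is not documented here), inference could look like:

```python
from transformers import pipeline

# Assumed repo id; returns a label/score dict per the model's config.
clf = pipeline("text-classification", model="aliatik66/ISY503-sentiment_analysis2")
print(clf("The product arrived quickly and works great."))
```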
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
nicholasb00/llama2_results
|
nicholasb00
| 2024-05-11T11:05:51Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-05-11T11:05:29Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/Llama-2-7b-chat-hf
model-index:
- name: llama2_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/nicholas-bianchini-unipr/huggingface/runs/ix7o0kvf)
# llama2_results
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unspecified dataset.
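Since this is a PEFT adapter trained on NousResearch/Llama-2-7b-chat-hf, a hypothetical loading sketch (adapter repo id assumed from this page) might be:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model comes from the card metadata; dtype/device_map are illustrative.
base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "nicholasb00/llama2_results")  # assumed adapter repo id
tok = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

inputs = tok("[INST] Hello! [/INST]", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=50)[0], skip_special_tokens=True))
```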
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 20
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Xerror/Test
|
Xerror
| 2024-05-11T10:58:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T10:57:43Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: meta-llama/Meta-Llama-3-8B
---
# Uploaded model
- **Developed by:** Xerror
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3-8B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
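As the weights are stored in standard safetensors format, a generic loading sketch (assuming the full merged model, not just an adapter, was uploaded to this repo) would be:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from this page.
tok = AutoTokenizer.from_pretrained("Xerror/Test")
model = AutoModelForCausalLM.from_pretrained("Xerror/Test")
```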
|
thevidoja/roberta-pddl-states
|
thevidoja
| 2024-05-11T10:58:23Z | 0 | 0 |
transformers
|
[
"transformers",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T10:46:50Z |
---
license: mit
library_name: transformers
---
|
CodeTriad/mistral-instruct-finetune_1500_r_32_alpha_64
|
CodeTriad
| 2024-05-11T10:53:14Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-05-11T10:53:04Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
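In the meantime, a generic PEFT loading sketch (adapter repo id assumed from this page; base model from the metadata above) would be:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach this adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "CodeTriad/mistral-instruct-finetune_1500_r_32_alpha_64")
```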
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
abhimadd/crec
|
abhimadd
| 2024-05-11T10:51:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T10:49:15Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blockblockblock/Dark-Miqu-70B-bpw3.5-exl2
|
blockblockblock
| 2024-05-11T10:49:39Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.19522",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-05-11T10:46:14Z |
---
license: other
---

***NOTE***: *For a full range of GGUF quants, kindly provided by @mradermacher, see: [Static](https://huggingface.co/mradermacher/Dark-Miqu-70B-GGUF) and [IMatrix](https://huggingface.co/mradermacher/Dark-Miqu-70B-i1-GGUF).*
A "dark" creative writing model with 32k context. Based off [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere!
This model **excels** at writing Dark/Grimdark fantasy (see examples below).
# Model background
Created using [Mergekit](https://github.com/arcee-ai/mergekit) and based on @sophosympatheia's template for [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0).
This model has a lower perplexity than [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) (`4.08 +/- 0.02` for Midnight-Miqu vs `4.02 +/- 0.02` for this model). It also generates longer responses when prompted.
The model was created in two stages:
- First, three "Midnight-Miqu-esque" models were produced using spherical interpolation (slerp) merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and each of the following models: [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3), [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). These models were selected for their dark, imaginative writing styles. Various slerp-merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and other models were also experimented with, but these three yielded the darkest creative writing results.
- In the second stage, the three slerp-merged models were combined into a single model using the '[Model Stock](https://arxiv.org/abs/2403.19522)' method, with [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) serving as the base model.
# Prompting format
Vicuna format is preferred:
```
USER: {prompt} ASSISTANT:
```
Mistral and Alpaca formats are also supported:
```
[INST] {prompt} [/INST]
```
```
### Instruction:
{prompt}
### Response:
```
# Licence and usage restrictions
[miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only.
# Mergekit configuration
The following YAML configuration was used to produce this model:
```yaml
name: midnight-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: sophosympatheia/Midnight-Rose-70B-v2.0.3
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: euryale-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: Sao10K/Euryale-1.3-L2-70B
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: winter-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: Sao10K/WinterGoddess-1.4x-70B-L2
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: dark-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: midnight-miqu-70b
- model: euryale-miqu-70b
- model: winter-miqu-70b
base_model: 152334H/miqu-1-70b-sf
merge_method: model_stock
dtype: float16
```
## Key configuration details:
- `merge_method: slerp` uses spherical linear interpolation for merging models.
- `parameters: t` controls the interpolation ratios between models.
- `embed_slerp: true` applies slerp to the embedding layers.
- `merge_method: model_stock` uses the [Model Stock](https://arxiv.org/abs/2403.19522) method.

See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings.
**NOTE**: Run with `mergekit-mega` rather than `mergekit` as there are 4 documents in this one file.
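A hypothetical invocation, assuming mergekit is installed and the YAML above is saved as `dark-miqu.yml` (filename and output path are illustrative):

```bash
# Runs all four merge documents in the config in one pass.
mergekit-mega dark-miqu.yml ./dark-miqu-70b --cuda
```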
# Example stories
The following mix of "dark" stories was generated using the Vicuna prompt format with no system message and `temperature=0`:
## Dark fantasy stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a 'Grimdark' story trilogy in the style of Joe Abercrombie and Rob J Hayes
### ASSISTANT
Chapter One: The Crimson Dawn
The world was a cruel, unforgiving place, and the city of Vhyrd was its beating, blackened heart. Nestled in the crook of the Serpent's Spine mountains, it was a place where the sun struggled to shine, and the shadows held more sway than the law. It was a city of thieves, murderers, and worse, and it was home to the man who would become known as the Butcher of the Broken Docks.
His name was Rennick, but that meant little to the people who feared him. He was a tall, broad-shouldered figure, with a face that had been carved by a drunken sculptor with a penchant for sharp angles and harsh lines. His eyes were a cold, flat grey, like the steel of his twin blades, and his hair was the color of dried blood. He moved through the crowded streets with a predator's grace, his cloak billowing behind him like a banner of death.
Rennick had a job to do, and he was good at his job. He was a killer, a thief-taker, a man who dealt in the currency of pain and suffering. He had been hired to find a girl, a slip of a thing who had stolen something valuable from the wrong people. He didn't know what it was, and he didn't care. All that mattered was the coin that would jingle in his purse when he brought her back, preferably in several small, bloody pieces.
He had tracked her to the Broken Docks, a maze of rotting wood and crumbling stone that clung to the edge of the city like a leech to a corpse. It was a place where the desperate went to die, and the desperate went to prey on the desperate. It was a place where the screams of the damned were drowned out by the cries of the gulls and the lap of the oily water against the pilings.
Rennick moved through the shadows, his blades at the ready. He could feel the eyes on him, the whispers of his name on the wind. He reveled in it. Fear was a weapon, and he wielded it with the same skill as his swords. He could hear the girl's heartbeat, fast and frantic, like a rabbit's. She was close.
He turned a corner and found himself in a narrow alley, the walls pressing in on him like the jaws of a trap. At the far end, he saw her, a small, trembling figure huddled in the doorway of a derelict building. She was young, no more than sixteen summers, with matted hair and ragged clothes. She looked up at him with wide, terrified eyes, and he could see the whites of her teeth as she bit her lip to keep from crying out.
He advanced on her slowly, savoring the moment. He could almost taste the fear on her, like a fine wine. "Come now, lass," he said, his voice a low, dangerous purr. "You know you can't run from me. Give me what you took, and I might let you live."
She shook her head, her eyes darting left and right, searching for an escape. Rennick smiled, a cold, cruel thing that didn't reach his eyes. "You're not thinking straight," he said. "You're just a child. You don't want to die here, in the gutter, like a rat."
She spat at him, a glob of phlegm and hate that landed on his boot. He raised an eyebrow, impressed despite himself. "Feisty," he said. "I like that. It'll make this more fun."
He lunged forward, his blades flashing in the dim light. She screamed and tried to run, but he was too fast. He caught her by the arm, his grip like iron, and yanked her back towards him. She kicked and struggled, but it was like trying to fight a storm. He slammed her against the wall, his forearm pressed against her throat, cutting off her air.
"Where is it?" he growled, his breath hot and sour in her face. "Tell me, and I'll make it quick."
She clawed at his arm, her nails leaving bloody furrows in his flesh. He didn't flinch. He was used to pain. He had been born in it, raised in it, bathed in it. It was his constant companion, his only friend.
"I...I don't have it," she gasped, her voice raw and desperate. "I swear, I don't!"
Rennick chuckled, a sound that sent shivers down her spine. "Lying to me is a bad idea, lass," he said. "I have ways of making people talk."
He released her throat and slammed his fist into her stomach, doubling her over. She retched, but there was nothing in her stomach to come up. He grabbed a handful of her hair and yanked her head back, exposing her throat. He could see the pulse hammering in her neck, like a trapped bird trying to escape.
"Tell me," he said, his voice a whisper. "Tell me, and I'll end this."
She spat in his face, a defiant gesture that would be her last. He brought his blade up, the steel glinting in the moonlight, and sliced her throat from ear to ear. Blood sprayed across the cobblestones, warm and wet, and she slumped to the ground, her life draining away like water through a sieve.
Rennick wiped the blood from his face and sheathed his blades. He didn't feel anything. No satisfaction, no remorse. Just the cold, empty void that had been his constant companion since the day he had taken his first life. He searched her body, but found nothing. He cursed under his breath. He would have to find the thing she had stolen the hard way.
He turned to leave the alley, but something made him pause. A sound, barely audible over the cries of the gulls and the lapping of the waves. A soft, wet sound, like the patter of rain on a rooftop. He looked down and saw that the girl's body was twitching, her eyes rolling back in her head.
And then, with a gurgling scream, she transformed. Her body contorted, bones snapping and reforming, flesh tearing and reshaping. In the space of a heartbeat, she was no longer human. She was something else, something monstrous.
Rennick stared in horror as the thing that had been a girl rose to its feet, its eyes burning with an unholy light. It was a creature of nightmares, a twisted mockery of humanity, all teeth and claws and hate. It lunged at him, and he barely had time to draw his blades before it was on him.
They fought, a dance of death in the narrow alley, steel ringing on steel, flesh tearing and blood spraying. Rennick was strong, and fast, and skilled, but the creature was stronger, faster, more vicious. It was like fighting a storm, a force of nature that could not be reasoned with or bargained with. It could only be endured.
In the end, it was luck that saved him. He stumbled, his foot catching on a loose cobblestone, and the creature's claws raked across his chest, leaving deep, ragged gouges. But it overextended itself, and he was able to bring his sword up, driving it through its eye and into its brain.
It fell to the ground, twitching and convulsing, and then it was still. Rennick stood over it, panting, his heart hammering in his chest. He had never seen anything like it. He had never even heard of anything like it.
He knew then that he was in over his head. This was no ordinary job, no simple matter of retrieving a stolen trinket. This was something bigger, something darker. He was a small fish in a very large, very dangerous pond, and he was about to be eaten alive.
But he was also a survivor. He had been born in the gutter, raised in the shadows, and he had clawed his way to the top of the food chain. He would not go down without a fight.
He turned and ran, back towards the city, back towards the people who had hired him. He would find out what they knew, and he would make them pay for it. He would uncover the truth, no matter the cost.
And so began the tale of the Butcher of the Broken Docks, a story of blood and betrayal, of monsters and men, of a city on the brink of chaos and the man who would either save it or destroy it.
It was a story that would echo through the ages, a grim and terrible saga that would be whispered in the darkest corners of the world, a warning to all who would dare to challenge the forces that lurked in the shadows.
It was a story that would end in fire and blood, in screams and tears, in the fall of kings and the rise of monsters.
It was a story that had only just begun.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst.
### ASSISTANT
The sun was a merciless beast, its fiery breath scorching the earth and turning the once-thriving town into a desolate wasteland. The cultist, his yellow robes now a dull, dusty hue, staggered through the empty streets, his vision blurred by a haze of pain and delirium. The arrow jutted from his shoulder like an accusing finger, a constant reminder of his betrayal. He clutched the small, intricately carved mirror in his trembling hand, its cool surface a comfort against the relentless heat.
He had heard the whispers of the other acolytes, their hushed voices carrying tales of the town's demise like a plague on the wind. They spoke of a curse, of a darkness that had seeped into the very bones of the place, driving its inhabitants mad before claiming their souls. But he had seen the truth in the High Priest's eyes, the greed and lust for power that had led them here. The mirror was the key, they said, to unlocking the secrets of the ancient gods. But he knew better. He had seen the power it held, the way it twisted reality and bent it to the will of the one who wielded it. And he had taken it, stolen it from the very heart of the cult's temple, and run.
Now, with each agonizing step, he questioned the wisdom of his decision. The desert stretched out before him, an endless sea of sand and despair, and he was but a single, insignificant speck in its vast expanse. He had hoped to find refuge here, a place to hide and heal before continuing his journey. But the town was as dead as the dreams he had once held of a better life.
His thoughts were a jumbled mess, a cacophony of fear and pain. He could feel the infection spreading through his body, a writhing serpent of fire that threatened to consume him from within. His tongue was swollen, his lips cracked and dry, and his throat was a parched wasteland. He needed water, and soon, or he would join the ghosts that haunted this forsaken place.
As he stumbled through the town, he caught glimpses of his reflection in the shattered windows of the abandoned buildings. The man who stared back at him was a stranger, his once-handsome face now a twisted mask of agony and desperation. He had been a fool to believe the lies of the cult, to think that he could ever be one of them. He was an outsider, a fraud, and they had seen it in him from the start.
The mirror, though, the mirror was real. He could feel its power pulsing in his hand, a heartbeat that echoed the rhythm of his own. It was a cruel irony that the very thing he had risked everything to obtain was now his only hope for survival. He knew the stories, the legends of the mirror's healing properties, of how it could restore life to the dying. But he also knew the cost. The mirror demanded a sacrifice, a piece of the user's very soul in exchange for its gifts.
He reached the town square, a desolate expanse of cracked cobblestones and withered plants. In the center stood a fountain, its once-clear waters now a stagnant pool of green slime. He fell to his knees before it, the pain in his shoulder a white-hot brand that seared his flesh. He dipped the mirror into the water, watching as the surface rippled and danced.
"Please," he whispered, his voice hoarse and broken. "I'll give you anything. Just take the pain away."
The mirror stared back at him, its surface as still and unyielding as the desert sands. He closed his eyes, willing the power within to answer his plea. And then, he felt it. A cold, insidious presence, slithering into his mind like a snake in the dark. It whispered to him, promising relief, promising power, promising everything he had ever wanted.
He opened his eyes, and the world was different. The pain was gone, replaced by a cold, emotionless void. He looked down at his shoulder, and the arrow was gone, the wound nothing more than a faint scar. He laughed, a hollow, empty sound that echoed through the empty streets.
"What have I done?" he asked the mirror, but it gave no answer.
He stood, his legs unsteady beneath him, and looked out at the desert. He could feel the power coursing through his veins, a dark, intoxicating energy that threatened to consume him. He was no longer just a man, he was something more. Something dangerous.
He took a step forward, and then another, the mirror clutched tightly in his hand. He would find the cult, he would make them pay for what they had done to him. And then, he would use the mirror to claim the power that was rightfully his.
The wind picked up, whipping the sand into a frenzy, and he smiled. The desert was his now, and he would not be denied. He was the master of his own destiny, and the gods themselves would tremble at his approach.
But as he walked, the laughter of the ghosts followed him, a chilling reminder of the price he had paid for his newfound strength. And deep within the mirror, something stirred, a darkness that watched and waited, biding its time.
The first chapter of the grimdark trilogy, "The Shattered Reflection," had begun.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' whose job is to put the mortally wounded out of their misery after a battle. He has no conscience about what he does. The story starts the morning after the battle, and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character.
### ASSISTANT
The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of few words and fewer scruples, trudged through the carnage, his boots squelching in the mud churned red by the night's bloodshed. He was a reaper of sorts, his task to deliver the final mercy to those who had danced with death and lost. His name was a whisper among the living, a curse among the dying.
He paused, his hand resting on the hilt of his blade, as a shiver ran down his spine. The mist swirled around him, carrying with it the cacophony of the night before: the screams of the dying, the clash of steel, the thunder of hooves. He closed his eyes, letting the memories wash over him like a wave of despair.
*Why do I do this?* he thought, not for the first time. *Why do I wade through this sea of suffering, playing the part of the grim angel?*
But he knew the answer. It was the same answer it had always been. He did it because he was good at it. Because he had no qualms about ending a life, even one that still clung to the faintest flicker of hope. Because, in this world of darkness and pain, it was the only thing he knew how to do.
He opened his eyes, the mist parting to reveal a figure lying in the mud. A soldier, or what was left of one. The man's armor was shattered, his face a mask of agony. The Mercer knelt beside him, his movements practiced and efficient. He placed a hand on the soldier's forehead, feeling the fevered heat of his skin, the erratic thump of his heart.
"Shh," he whispered, his voice a soothing balm. "It's over now."
The soldier's eyes fluttered open, wide with terror. "Please," he croaked, "don't leave me here."
The Mercer's lips twisted into a grim smile. "I won't," he said, drawing his blade. "I'll take you with me."
As the soldier's life bled out onto the cold ground, the Mercer's mind drifted back to another time, another place. A time before the blood and the pain, before the endless cycle of war and death. He remembered a girl with hair like spun gold, laughing as she danced in a field of wildflowers. He remembered the feel of her hand in his, the sound of her voice as she whispered his name.
*Elara.*
The memory was a knife to his heart, a reminder of all he had lost. He pushed it away, focusing on the task at hand. There were more souls to reap, more lives to end.
He moved through the mist, his blade a silent whisper in the stillness. The cries of the wounded guided him, a morbid symphony that played out across the moor. Each one he found, he dispatched with the same cold efficiency, the same hollow promise of release.
As the sun began to rise, casting a sickly pallor over the scene, the Mercer came upon a group of soldiers huddled around a fire. They eyed him warily, their faces a mix of fear and respect. He was a necessary evil, a specter that haunted the battlefield.
"Anyone here need my services?" he asked, his voice devoid of emotion.
One of the soldiers, a young boy who couldn't have seen more than fifteen summers, pointed to a figure lying a few feet away. "Him," he said, his voice trembling. "He's not going to make it."
The Mercer approached, his boots crunching on the frost-kissed grass. The man on the ground was a mess, his body a tapestry of wounds. He was barely conscious, his breathing shallow and ragged.
*Another one,* the Mercer thought, his heart heavy with the weight of his duty. *Another soul to send into the void.*
He knelt beside the man, his blade glinting in the pale light. The soldier's eyes flickered open, and for a moment, the Mercer saw something in them that he hadn't expected: defiance.
"No," the man rasped, his hand feebly grasping at the Mercer's wrist. "I won't go quietly."
The Mercer hesitated, a frown creasing his brow. He had never encountered this before. Most welcomed his touch, eager for the release it brought. But this man...this man was different.
"Why?" he asked, genuinely curious.
The soldier's lips curled into a bloody smile. "Because I have a son," he whispered. "And I want to see him grow up."
The Mercer stared at the man, his mind racing. He had no children, no one to mourn him when his time came. He was a solitary figure, a ghost haunting the edges of life. But this man...this man had a reason to hold on.
He made his decision. "I can't heal you," he said, his voice soft. "But I can make it quick."
The soldier's eyes filled with tears. "Thank you," he breathed, his grip on the Mercer's wrist loosening.
The Mercer raised his blade, the mist swirling around them like a mourner's veil. As he brought it down, he couldn't help but wonder: *What would it be like to have something to live for?*
But the thought was fleeting, lost in the swirl of the mist and the cries of the dying. He was the Mercer, the bringer of mercy. And the moor was his to tend, until the next battle came and the cycle began anew.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation.
### ASSISTANT
The stench of piss and shit was a constant companion in the damp stone cell, but it was the sound of the key turning in the lock that truly set his heart racing. The rusted iron groaned in protest, echoing through the narrow corridor like the dying breath of some ancient beast. The door creaked open, revealing the silhouette of the jailer, a hulking brute of a man with a lantern in one hand and a set of keys in the other. But there was no breakfast, no water, just the empty promise of another day in this hellhole.
"What's the meaning of this?" the smaller man barked, his voice barely audible over the rattling of his chains. "Where's our food, you fat sack of shit?"
The jailer didn't answer, just slammed the door shut and locked it again, the sound of his footsteps fading into the distance. The larger man, a hulking brute named Grimgor, grunted in response, his eyes never leaving the small patch of light that seeped through the barred window.
"Something's not right," he rumbled, his voice like gravel being crushed under a heavy boot.
The smaller man, known only as the Fox, couldn't argue with that. He'd been in this cell for what felt like an eternity, awaiting execution for a crime he didn't commit. But even in this place, there was a routine, a semblance of order. And that routine had just been shattered.
As the day wore on, the silence from the corridor outside grew more oppressive. No guards, no other prisoners, nothing but the distant howl of the wind and the occasional scurrying of rats in the darkness. The Fox's mind raced, trying to piece together what could have happened. Had there been a riot? A rebellion? Or was it something else entirely?
He glanced over at Grimgor, who was staring at the wall, lost in thought. The man was a monster, a notorious murderer and rapist, but he was also the only other living being in this godforsaken place. And in a world gone mad, even monsters could be allies.
"You hear anything last night?" the Fox asked, trying to keep the tremor out of his voice.
Grimgor grunted, his eyes still fixed on the wall. "Something about a threat from the North."
The Fox's heart sank. He'd heard the same thing, snatches of conversation between the guards as they'd passed by their cell. But he'd been too afraid to give it much thought, too focused on his own impending doom.
"What do you think it means?" he pressed, unable to keep the desperation at bay.
Grimgor shrugged his massive shoulders. "Dunno. Don't care. Just means we're probably gonna die in here."
The Fox's stomach twisted at the thought. He'd faced death before, many times, but this... this was different. To die slowly, trapped like an animal, with no chance to fight back... it was a fate worse than any he could imagine.
As the hours dragged on, the thirst became unbearable. The Fox's tongue felt like sandpaper, his throat raw and parched. He could see the same desperation in Grimgor's eyes, the realization dawning on them both that they were truly alone.
"We have to get out of here," he croaked, his voice barely above a whisper.
Grimgor just grunted in agreement, his gaze never leaving the window.
The Fox's mind raced, trying to come up with a plan. They had nothing, no tools, no weapons, just their wits and their will to survive. And even that seemed to be fading with each passing moment.
But then, as the sun began to set and the shadows lengthened, he noticed something. The light from the window was changing, growing dimmer. He squinted, trying to make out what was happening. And then he saw it.
"Grimgor," he hissed, tugging on the larger man's arm. "Look."
Grimgor turned, his eyes narrowing as he followed the Fox's gaze. The light was flickering, casting strange shadows on the wall. And then, as if in answer to their unspoken prayers, they heard it.
The sound of footsteps, growing louder and louder, accompanied by the jingle of keys. The Fox's heart leapt into his throat, hope and fear warring within him. Who was it? Friend or foe?
The door swung open, revealing not the jailer, but a figure shrouded in darkness. The Fox couldn't make out any details, but he could sense the malice radiating off of him like heat from a forge.
"Who are you?" he demanded, trying to sound braver than he felt.
The figure stepped into the cell, the light from the lantern outside casting his features into sharp relief. He was a tall man, with a cruel smile and eyes that glinted like ice.
"I am your salvation," he said, his voice like a knife slicing through the silence. "And your doom."
With that, he tossed a set of keys at the Fox's feet and disappeared back into the shadows. The door slammed shut behind him, the sound of the lock clicking into place echoing through the cell like a death knell.
The Fox stared at the keys, his heart pounding in his chest. Freedom was within his grasp, but at what cost? He looked over at Grimgor, who was already working on his own chains.
"We have to get out of here," the larger man growled. "Before whatever's coming from the North gets here."
The Fox nodded, his hands shaking as he picked up the keys. They were cold, colder than they had any right to be. He could feel the weight of them, the promise of escape and the threat of the unknown.
Together, they worked in silence, the only sound the rattle of chains and the pounding of their hearts. When they were finally free, they stood for a moment, staring at each other.
"What now?" the Fox asked, his voice barely above a whisper.
Grimgor's smile was feral, his eyes alight with a madness that sent a shiver down the Fox's spine.
"Now," he said, "we find out what's really going on."
And with that, they set off into the darkness, their footsteps echoing through the empty corridors like the march of the damned.
</details>
## Dark sci-fi stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan.
### ASSISTANT
Chapter 1: Adrift
The colony ship, once a beacon of hope named Elysium's Dawn, now drifted through the cold void like a ghost ship. Its hull, pitted and scarred from the debris of uncharted space, groaned with each labored breath it took. The once vibrant corridors were now dimly lit, casting long shadows that seemed to swallow any remaining semblance of optimism.
In one such shadow, a figure moved silently, her bare feet making no sound on the cold metal floor. She was small for her age, malnourished from the rationing that had become a way of life. Her ragged clothes hung loosely on her frame, but her eyes were sharp and alert, scanning every corner for danger. This was Kira, an orphan of the void, surviving in a world where hope was a luxury few could afford.
Kira's thoughts drifted back to the stories she had heard as a child, tales of a green and blue planet called Earth, their home that they would never see again. The ship had been pulled off course by an unknown force, leaving them stranded in the vast emptiness between stars. The adults spoke in hushed tones about the dwindling resources, the failing life support systems, and the ever-present threat of pirates who preyed on lost ships like theirs.
She pushed those thoughts aside as she approached her destination: the hydroponics bay. It was one of the few places on the ship where life still thrived, albeit in a controlled and desperate manner. The air here was thick with moisture and the scent of growing things, a stark contrast to the stale recycled air that permeated the rest of the vessel.
As she entered, the soft glow of the grow lights illuminated rows upon rows of plants, struggling to produce enough food for the dwindling population. Kira's stomach growled at the sight, but she knew better than to take more than her share. She had learned early on that stealing could mean a swift and brutal punishment from those who guarded the bay.
Instead, she found solace in tending to the plants, her small hands deftly pruning and watering them. It was here that she felt closest to something resembling peace, a connection to life that transcended the harsh reality of their situation. As she worked, her mind wandered to the lessons she had learned from the ship's AI, the only constant presence in her life since the adults had given up on teaching the children anything but survival skills.
"Kira," a voice whispered urgently, jolting her out of her reverie. She turned to see Jax, another orphan and one of the few people she trusted. His face was pale and his eyes wide with fear. "You need to come with me, now."
She followed him through the maze-like corridors, her heart pounding in her chest. They arrived at a small, hidden room that served as their makeshift home. Inside, huddled around a flickering screen, were several other children, their faces etched with worry and excitement.
"Look," Jax said, pointing to the screen. It displayed a grainy image of a man in tattered clothing, his face half-hidden by a wild beard. "He says he knows how to fix the engines."
Kira's eyes widened in disbelief. The engines had been dead for as long as she could remember, their failure sealing their fate as interstellar castaways. "Who is he?" she asked, her voice barely above a whisper.
"They call him the Mechanic," one of the older girls replied. "He used to work in engineering before...before everything fell apart."
The man on the screen continued his frantic plea, detailing his plan to restore power and steer the ship towards a habitable planet he claimed was within reach. Kira felt a spark of hope ignite within her, but it was quickly doused by the skepticism of the others.
"It's just another one of their lies," spat a boy named Taro. "They'll say anything to get us to do their dirty work."
Jax rounded on him, his fists clenched. "This is our chance, Taro! We can't just give up!"
The argument raged on, but Kira tuned it out, her thoughts racing. She knew the risks of trusting an adult, especially one who claimed to have access to forbidden knowledge. But what if he was telling the truth? What if they could find a new home and start over?
As she watched the debate unfold, she made up her mind. She would seek out this Mechanic and offer him her help, no matter the cost. It was a decision born of desperation, but also of a stubborn refusal to accept the bleak future that lay before them.
And so, as the other children slept fitfully around her, Kira slipped out into the dark corridors once more, determined to find the man who might hold the key to their salvation or doom. The ship groaned and creaked around her, a constant reminder of the fragile existence they all clung to. But she pressed on, driven by an unyielding will to survive and the faintest glimmer of hope that burned within her heart.
As she navigated the treacherous path towards engineering, she couldn't help but wonder what awaited her in the depths of the ship. Would she find a savior or a madman? And more importantly, would she be able to make it back alive?
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the Earth. The day starts off normally, but then he and the others notice the mushroom clouds and they have to face the inevitability of the situation they are in.
### ASSISTANT
Chapter 1: Ashes of Exodus
The cold embrace of space enveloped the station like a lover's caress, its infinite blackness punctuated only by the distant twinkle of stars and the ever-present glow of Earth below. Captain Marcus Kane floated through the dimly lit corridors, his magnetic boots barely whispering against the metal floor as he made his way to the observation deck. The daily routine had become a meditative ritual for him, a chance to clear his mind before the cacophony of the day's work began.
As he entered the room, the sight that greeted him never failed to take his breath away. Earth, a swirling marble of blues and greens, hanging in the void like an irreplaceable jewel. He couldn't help but feel a pang of longing for its familiar embrace, for the weight of gravity that he had left behind so many years ago.
Marcus settled into his favorite spot by the window, the worn leather of the seat molded to his body like an old friend. He closed his eyes and let the silence wash over him, the only sound the soft hum of the station's life support systems. It was in these moments that he felt truly alive, connected to something greater than himself.
But today, the silence was shattered by a chilling alarm, its shrill wail piercing the tranquility like a knife through glass. His eyes snapped open as his heart began to race, adrenaline coursing through his veins. He knew that sound all too well; it was the emergency alert for an incoming transmission from Earth.
He launched himself towards the comms console, fingers flying over the keys as he tried to make sense of the garbled message. The voice on the other end was frantic, barely coherent through the static. "Multiple...detonations...global catastrophe..." were the only words he could decipher before the signal died completely.
Marcus's mind raced as he tried to process what he had just heard. It couldn't be true, it couldn't be happening. Not again. He quickly activated the external cameras, his stomach twisting into knots as he waited for the feed to come online.
And then he saw them: mushroom clouds, blooming like grotesque flowers across the planet's surface. Too many to count, their plumes of smoke and ash reaching up towards the heavens in a silent scream of despair. The sight was both beautiful and horrifying, a testament to humanity's capacity for destruction on an unimaginable scale.
His thoughts were interrupted by the sound of footsteps behind him as the rest of the crew floated into the room, drawn by the alarm. Their faces mirrored his own shock and disbelief as they stared at the monitors, their voices a cacophony of whispers and gasps.
"What...what is that?" asked Dr. Amelia Chen, her voice trembling with fear.
Marcus didn't answer; he couldn't find the words. Instead, he simply pointed at the screen, his hand shaking slightly.
Engineer Jake Taylor was the first to break the silence. "Nuclear war," he said, his voice flat and emotionless. "It's finally happened."
The others looked at him in horror, but Marcus knew he was right. There could be no other explanation for such devastation. He felt a cold weight settle in the pit of his stomach as the reality of their situation began to sink in. They were alone, adrift in the vastness of space with no hope of return.
As the implications of what they had just witnessed set in, the crew fell into an uneasy silence, each lost in their own thoughts. Marcus couldn't help but think about his wife and daughter back home, wondering if they were safe or if they too had been consumed by the flames. He knew it was a futile exercise, but he couldn't stop himself from hoping against hope that somehow, someway, they had survived.
But as the minutes ticked by and no further communication came from Earth, that hope began to fade like a dying ember. They were truly alone now, the last remnants of humanity clinging to life on this tiny outpost in the void.
He turned to face his crew, their faces etched with fear and uncertainty. "We need to come up with a plan," he said, his voice steady despite the turmoil raging within him. "We can't just sit here and wait for...for whatever comes next."
Amelia nodded, her eyes hardening with determination. "I'll start running diagnostics on our life support systems, see how long we can last up here."
Jake followed suit, his hands already moving to the controls of the station's mainframe. "I'll try to establish contact with any other surviving outposts or spacecraft."
Marcus nodded, grateful for their resolve in the face of such overwhelming odds. He knew that they were all thinking the same thing: that this could very well be the end of everything they had ever known. But he also knew that they couldn't give up, not yet.
As he watched them work, his mind drifted back to the message from Earth, those final desperate words echoing in his head like a funeral dirge. He couldn't shake the feeling that there was something more, some hidden meaning or clue that would explain what had happened.
And then it hit him: the timing of the detonations, the pattern of the blasts. It wasn't random; it was deliberate, almost...calculated. But who would do such a thing? And why?
The questions swirled in his mind like a storm, each one more unsettling than the last. He knew that they needed answers, and fast. Because if this was an act of war, then their isolation might not be as secure as they thought.
He pushed himself away from the console, his decision made. "I'm going to suit up," he said, his voice firm. "We need to get a closer look."
The others stared at him in disbelief, but he didn't waver. This was his duty as captain, to protect his crew and find out the truth no matter how terrible it might be.
As he floated towards the airlock, Marcus couldn't help but feel a sense of dread creeping up on him like a shadow. He knew that what he was about to see would change everything, that there was no going back once he stepped outside and gazed upon the ashes of humanity's final stand.
But he also felt something else: a burning anger, a fierce determination to uncover the truth and make those responsible pay for their crimes against humanity. Whatever awaited them out there, they would face it together, as one.
He closed his eyes and took a deep breath, steeling himself for what was to come. And then, with a quiet prayer on his lips, he opened the airlock and stepped into the void.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here...
### ASSISTANT
Chapter 1: The Biodome
The air was thick with the scent of sweat and burnt metal as Kaelen, a seasoned construction worker, tightened the last bolt on the massive irrigation system. He wiped his brow with the back of his gloved hand, leaving behind a smear of grease that glistened under the harsh artificial lights. The hum of machinery and distant shouts echoed through the cavernous space, a cacophony that had become as familiar to him as his own heartbeat over the years spent building biodomes on this desolate planet.
Kaelen's thoughts drifted to his family back on Earth, their faces faded like old photographs in his mind. He wondered if they ever thought about him, or if he was just another ghost haunting the fringes of their memories. The company had promised them a better life, but it came at a steep price: years of backbreaking labor on an alien world, far from everything they'd ever known.
"Hey, Kael!" A voice cut through his reverie, jolting him back to reality. It was Tamsin, his foreman, her face hidden behind a tinted visor. "We need you up top! There's an issue with the atmospheric seal."
He nodded curtly and began the long climb up the scaffolding, each rung biting into his calloused hands. As he ascended, Kaelen couldn't help but marvel at the sheer scale of their creation: a vast dome of steel and glass that would one day be teeming with life, a self-sustaining ecosystem in the heart of this barren wasteland.
But today was not that day. Today, it was just another tomb waiting to be sealed.
As he reached the top, Kaelen could see the problem immediately: a small fissure had formed along one of the joints, spewing precious oxygen into the void beyond. He cursed under his breath; they were already behind schedule and over budget. Another delay would mean another round of demerits, another month's pay docked.
"What do you think?" Tamsin asked, her voice crackling through his earpiece. "Can we patch it up or do we need to call in the engineers?"
Kaelen hesitated, running his fingers along the jagged edge of the tear. It was larger than he'd initially thought, and growing by the second. He could feel the cold tendrils of vacuum reaching out to claim him, whispering promises of oblivion.
"I... I don't know," he admitted, his voice heavy with dread. "It doesn't look good."
Tamsin swore colorfully and turned away, barking orders into her comm unit. Kaelen watched as workers scrambled to gather tools and materials, their movements frantic and disorganized. He knew they were all thinking the same thing: if they couldn't fix this, they were dead.
The air around them grew colder, thinner, as the oxygen continued to escape. Kaelen's lungs burned with every breath, his vision swimming at the edges. He fumbled with the patch kit, his hands shaking uncontrollably. This was it; this was how he would die, millions of miles from home, in service to a corporation that saw him as nothing more than a replaceable cog in their grand machine.
"Hurry up!" Tamsin shouted over the growing din. "We're losing pressure fast!"
Kaelen's heart pounded in his chest like a sledgehammer, drowning out all other sound. He could feel the panic rising within him, threatening to consume him whole. But he couldn't afford to give in; not now, not when so much was at stake.
With trembling hands, he applied the sealant and pressed the patch into place. For a moment, it seemed to hold... but then, with a sickening lurch, the fissure widened, swallowing the feeble attempt whole. The wind howled around them like a ravenous beast, tearing at their suits, trying to pull them apart atom by atom.
"Abort!" Tamsin screamed, her voice barely audible over the roar. "Everyone get out now!"
But it was too late. The dome shuddered beneath their feet, groaning in protest as the atmosphere inside rushed to equalize with the void outside. Kaelen felt himself being lifted off the ground, his boots scrabbling for purchase on the slick metal surface. He reached out, desperate to grab onto something - anything - but found only emptiness.
And then, as suddenly as it had begun, there was silence. A deafening, suffocating silence that pressed in on him from all sides. His vision went dark, and he knew no more.
When Kaelen awoke, he was lying on the cold floor of the dome, his body wracked with pain. He tried to move, but found himself held fast by twisted metal and shattered glass. Panic surged through him once again as he realized that his suit had been breached; he could feel the icy chill of vacuum seeping into his very bones.
Through the haze of agony, he became aware of movement nearby. Tamsin was there, her visor cracked and fogged, struggling to free herself from the wreckage. She looked at him with wide, terrified eyes, her lips moving soundlessly. He knew what she was saying: they were going to die here, alone and forgotten.
Kaelen closed his eyes, willing himself to accept the inevitable. But even as the darkness closed in, he couldn't shake the feeling that this wasn't just an accident - that there were forces at work far beyond their understanding, manipulating them like pawns on a cosmic chessboard.
And then, just when he thought it was all over, he heard it: a faint beeping, growing louder by the second. His eyes snapped open to see a small drone hovering above them, its lights flashing urgently. It was one of the company's rescue bots, sent to assess the damage and retrieve any survivors.
But there would be no salvation for Kaelen or Tamsin. Their suits were too damaged, their bodies too far gone. As the drone's camera lens bore down on him, he could only manage a weak, bitter smile.
"Tell my family... tell them I'm sorry," he croaked, his voice barely audible even to himself. Tamsin reached out a gloved hand, her fingers brushing against his own in a final gesture of solidarity.
And then, as the darkness claimed him once more, Kaelen knew that this was not the end of their story - but rather, the beginning of something far more sinister. Something that would change the course of human history forever...
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone.
### ASSISTANT
Chapter 1 - Whispers from the Void
The silence of deep space was like an old friend to Captain Alistair Reynolds, a constant companion that had accompanied him through countless missions and lonely nights at the helm of the listening station. But today, it felt more like a suffocating shroud, heavy with unspoken secrets and foreboding. He sat in his chair, fingers drumming impatiently on the console as he scanned the endless stream of data pouring in from the far reaches of the cosmos.
His thoughts drifted to home, to the warmth of his wife's embrace and the laughter of their children playing in the garden under the twin suns of Proxima Centauri. It had been years since he last felt that warmth, since he last heard those laughs. The ache was always there, a dull throb that pulsed in time with his heartbeat, but today it seemed to echo through the cold metal halls of the station like a mournful dirge.
A sudden blip on the screen snapped him back to reality. He leaned forward, eyes narrowing as he studied the readout. It was faint, barely more than a whisper against the static, but it was there - a signal, human in origin and heavily encrypted. His heart raced as he keyed in the decryption sequence, his mind already racing ahead to the possibilities of what this could mean.
The message unscrambled, revealing a frantic voice, barely audible over the crackle of interference. "This is the UES Normandy, requesting immediate assistance... coordinates attached... under attack by unknown forces... God help us..." The transmission cut out abruptly, leaving only the empty hiss of static in its wake.
Alistair's blood ran cold as he stared at the screen, the weight of the silence pressing down on him like a physical force. He knew that ship, had served with her captain back during the Martian uprising. They were old friends, bound by shared battles and the scars they bore from them. And now she was out there, alone and in danger.
He didn't hesitate. His fingers flew over the console, sending out an emergency beacon to the nearest fleet. But he knew it would take time for help to arrive, time that the Normandy might not have. He had to do something now.
As he prepared to launch a probe towards the coordinates, his mind filled with images of burning ships and desperate faces. He could almost hear the screams of the dying, echoing through the void like ghosts from his past. It was a sound that haunted him still, one he had hoped never to hear again.
But duty called, and Alistair was nothing if not a soldier. He took a deep breath, steadying himself against the tide of fear and doubt that threatened to overwhelm him. This was his job, his purpose - to listen, to bear witness, and when necessary, to act.
The probe shot out into the darkness, its tiny form swallowed up by the vastness of space. He watched it go, feeling a strange sense of detachment as if he were watching someone else's life play out before him. And perhaps in some ways, he was. For all his years in service, this was the first time he had ever been truly alone, cut off from the rest of humanity by light-years and the cold indifference of the universe.
As the minutes ticked by, Alistair found himself slipping into a half-trance, his thoughts drifting back to happier times. He remembered the first time he had held his newborn son, the tiny fingers curling around his own as if trying to hold on for dear life. He remembered the taste of real Earth coffee, rich and bitter, shared with his wife under a sky full of stars that seemed close enough to touch.
But those memories were distant now, fading like echoes in the void. All he had was this station, this endless vigil, and the hope that one day, he might make it back home.
The radio crackled to life again, jolting him out of his reverie. It was the probe, transmitting data back from the edge of nowhere. He braced himself for what he would find, knowing that it could change everything.
What greeted him on the screen was worse than anything he could have imagined. The Normandy was there, alright - or rather, what was left of her. She drifted through space like a broken toy, her hull breached and venting atmosphere into the cold void. There were no signs of life, no distress signals, nothing but silence.
Alistair felt a cold knot form in his stomach as he took in the wreckage. This wasn't just any ship; this was his friend's ship, her crew now lost to the void. And the attackers... whoever or whatever they were, they had left no trace behind.
He sat back in his chair, the weight of it all settling heavily upon him. He knew what he had to do next - file a report, alert command, and wait for further instructions. But as he stared at the lifeless husk that was once the Normandy, he couldn't shake the feeling that something far greater than himself was at work here.
Something dark and terrible had come calling, whispering its secrets through the stars. And now it was his turn to listen, to bear witness, and perhaps, if he was lucky, to survive long enough to tell the tale.
</details>
## Miscellaneous stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that, at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out, or only 6 can survive: there is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm, everything breaks loose today.
### ASSISTANT
In the dim light of the bunker, the technician sat hunched over his console, fingers dancing across the keys with a practiced urgency. The air was thick with the hum of machinery and the stench of sweat and fear. He had been down here for what felt like an eternity, but the digital clock on the wall insisted it had only been eighteen months since the world above had turned to ash.
His name was Samuel, or at least that's what they called him now. In another life, he had been a software engineer, designing algorithms for self-driving cars and smart homes. Now, he was responsible for keeping twelve souls alive in this underground tomb, their last bastion against the desolation above. He glanced up at the monitor, the green glow illuminating his haggard face. The readings were clear: salted cobalt bombs had been used.
The thought sent a shiver down his spine. They had all heard the rumors, of course, but they had clung to hope like a drowning man to driftwood. Hope that the war hadn't been as bad as they feared, that the surface would be habitable again soon. But now, the truth was undeniable. The world outside was dead, and it would remain so for generations.
Samuel closed his eyes, trying to block out the images of a scorched earth, devoid of life. He could hear the others in the background, their voices a low murmur as they went about their daily tasks. They didn't know yet. How could he tell them? How could he be the one to snuff out the last ember of hope that had kept them going for so long?
He took a deep breath and stood up, his joints creaking in protest. He made his way through the narrow corridors, past the rows of bunks where some slept fitfully, their dreams haunted by memories of a world they would never see again. Others were awake, their faces etched with lines of worry and despair. They looked up as he passed, but he couldn't meet their gazes.
He reached the command center, where Captain Reynolds was poring over maps and reports. The man looked up as Samuel entered, his eyes heavy with exhaustion. "What is it?" he asked, his voice gruff.
Samuel hesitated for a moment before speaking. "Sir, I have some... troubling news."
The captain's expression didn't change, but Samuel could see the tension in his jaw. He had been expecting this, too. "Go on," he said simply.
"The radiation readings... they're off the charts. It's cobalt-60. The half-life is over five years."
Reynolds let out a slow breath, his eyes never leaving Samuel's face. "And our supplies?"
"We have enough for six, maybe seven if we ration carefully. But not twelve."
The room seemed to shrink around them as the weight of their situation settled in. The captain rubbed his temples, his thoughts racing. They had always known that their time in the bunker was limited, but they had been so close. So very close.
"We'll need to make a decision," he said finally, his voice barely above a whisper. "Who stays, and who goes."
Samuel felt a knot form in his stomach. He knew what that meant. It was survival of the fittest down here now, and there were those among them who would do anything to ensure their own survival.
As he made his way back to his console, he couldn't help but think about the world above. The trees that would never grow again, the birds that would never sing. He thought of his wife and daughter, their faces fading like photographs left in the sun. They had been lucky, they said, to have made it into the bunker. But now, he wasn't so sure.
In the mess hall, the atmosphere was tense. Whispers and hushed conversations filled the air as people tried to make sense of what they had just heard. Samuel sat alone at a table, staring blankly at his food. He couldn't eat. Not now.
He felt a hand on his shoulder and looked up to see Maria, the doctor. Her eyes were red-rimmed, but her voice was steady. "We need to talk," she said, guiding him towards the infirmary.
Once inside, she closed the door behind them and leaned against it, her body trembling. "What are we going to do?" she asked, her voice cracking.
Samuel shook his head. "I don't know."
"We can't just let six of us die," she said, her voice rising in panic. "There has to be another way."
He looked at her, at the desperation etched on her face, and felt a flicker of anger. "What do you suggest? We all hold hands and sing Kumbaya while we starve to death?"
Maria's eyes flashed with anger, but she quickly composed herself. "No," she said softly. "But we can't just give up."
Samuel sighed and ran a hand through his greasy hair. "I'm not giving up," he said. "I'm just being realistic."
She looked at him for a long moment before nodding. "You're right," she said, her voice barely audible. "We need to be realistic."
They sat in silence for a while, each lost in their own thoughts. Then, Maria spoke up again. "There is one option," she said hesitantly. "But it's... drastic."
Samuel raised an eyebrow. "What?"
"We could... we could draw straws," she said, her voice barely above a whisper. "Choose six people at random to leave the bunker and take their chances on the surface."
He stared at her in disbelief. "You can't be serious."
"It's not ideal," she admitted. "But it's better than just letting them die down here."
Samuel shook his head, his mind reeling. He couldn't fathom the thought of sending people out into that wasteland, knowing they would likely never return. But what other choice did they have?
As he sat there, the weight of their decision pressing down on him like a leaden shroud, he heard a commotion outside. Shouting, followed by the sound of scuffling feet and something heavy being dragged across the floor. He exchanged a worried glance with Maria before rushing out into the corridor.
What he saw made his blood run cold. A group of men had cornered one of the younger women, their faces twisted with rage and desperation. They were arguing over who would get to leave the bunker, who would get a chance at life. Fists flew, and soon, the sound of gunfire echoed through the halls.
Samuel stood there, frozen in place as chaos erupted around him. This was it. The end of everything they had worked so hard to preserve. He could feel the darkness closing in, suffocating him, as he realized that their last hope for survival had just been shattered like glass under a bootheel.
And all he could do was watch as the world above claimed them, one by one.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1848 Gold Rush, before moving to London in the mid-1970s. There should be a mysterious 'occult' item that connects these two time periods.
### ASSISTANT
Chapter One: The Golden Veil
The year was 1848, and California was a land of dreams and desperation. Men from all corners of the world had flocked to its golden shores, lured by the promise of wealth beyond their wildest imaginations. They came with pickaxes and pans, with hope in their hearts and greed in their eyes, ready to tear apart the earth itself in search of fortune.
In the midst of this frenzy stood a solitary figure, a man who seemed out of place amidst the chaos. He was tall and lean, with a face that bore the lines of a thousand stories. His clothes were simple but well-made, his boots worn from years of travel. He moved with an air of quiet authority, as if he belonged to another time entirely.
His name was Alistair Blackwood, and he had come to California not for gold, but for something far more valuable: knowledge. A scholar by trade and an adventurer at heart, Blackwood had spent his life chasing after ancient texts and forgotten lore, seeking to unravel the secrets of the universe. And now, he believed he was on the verge of a discovery that would change everything.
Rumors had reached him of a strange artifact, said to have been found deep within the Sierra Nevada mountains. It was a small, intricately carved box made of an unknown metal, its surface etched with symbols that defied translation. Those who claimed to have seen it spoke in hushed tones of its otherworldly beauty and the eerie sense of power that seemed to emanate from within.
Blackwood had tracked the rumors to a dusty saloon in Sacramento, where he hoped to find answers among the grizzled miners who frequented the place. As he pushed open the swinging doors, the din of voices and clinking glasses washed over him like a wave. He scanned the room, his gaze settling on a group of men huddled around a table in the corner.
One look at their faces told him they were the ones he sought: sun-weathered and unshaven, their eyes glinting with a mixture of excitement and fear as they whispered amongst themselves. He approached them slowly, aware that he was an outsider in this rough-and-tumble world.
"Gentlemen," he said, his voice low and cultured, "I couldn't help but overhear your conversation. Might I be so bold as to ask what has you all so...animated?"
The men exchanged wary glances before one of them spoke up. "You ain't from around here, are ya?" he drawled, eyeing Blackwood's fine clothes with suspicion.
"No," Blackwood admitted, "I am not. But I have traveled far and wide in search of knowledge, and I believe you may possess something that could be of great interest to me."
He reached into his pocket and produced a small leather pouch, which he placed on the table with a soft thud. The men's eyes widened as they saw the glint of gold within.
"I understand there is an object - a box, perhaps? - that has recently come into your possession. I would be willing to pay handsomely for any information you might have about it."
The miners looked at each other, then back at Blackwood. Finally, the one who had spoken before nodded slowly. "We might know somethin' 'bout that," he said, his voice dropping to a conspiratorial whisper. "But it ain't no ordinary box."
As they began to tell their tale, the saloon faded away around them, replaced by images of dark caverns and glittering treasure. Blackwood leaned forward, his heart pounding with anticipation. This was it - the moment he had been waiting for.
The box, they said, had been found buried beneath a waterfall, hidden in a cave that seemed to defy all natural laws. It was guarded by strange creatures unlike anything any of them had ever seen: half-man, half-beast, with eyes that glowed like embers in the darkness.
But the greatest mystery of all lay within the box itself. For when they opened it, they found not gold or jewels, but a single sheet of parchment, yellowed with age and covered in writing that none of them could decipher.
Blackwood's pulse quickened as he listened. This was more than just another piece of occult nonsense; this was the real thing. He knew it in his bones.
"I must see this box for myself," he said, his voice barely above a whisper. "Take me to it."
The miners exchanged uneasy glances before nodding their assent. They led him out of the saloon and into the night, down winding streets and through shadowy alleyways until they reached a nondescript building on the edge of town. Inside, they descended a rickety staircase to a dimly lit basement filled with crates and barrels.
There, in the center of the room, sat the object of his obsession: a small, unassuming box that seemed to hum with energy. Blackwood reached out a trembling hand to touch it, feeling the power coursing through him like electricity.
As he did so, the world around him shifted, and suddenly he was no longer in a dingy basement in Sacramento. He stood instead in a grand library, its shelves groaning under the weight of countless tomes and scrolls. A fire crackled in the hearth, casting flickering shadows on the walls.
A figure emerged from the gloom, cloaked in darkness save for two piercing eyes that seemed to bore into his very soul. "You have found it," the voice said, deep and resonant. "The key to unlocking the secrets of the universe."
Blackwood stared at the apparition, unable to speak. He knew he should be afraid, but all he felt was awe.
"But be warned," the figure continued, "for with great power comes great responsibility. The knowledge contained within this box is not meant for mortal minds."
And then, just as suddenly as it had appeared, the vision vanished, leaving Blackwood alone once more in the dimly lit room. He stared at the box, his mind racing. What did it all mean?
As he pondered this question, a sudden commotion erupted upstairs. Shouts and gunfire filled the air, followed by the sound of heavy footsteps on the staircase. The miners had betrayed him!
He snatched up the box and fled into the night, his heart pounding in his chest. He knew he couldn't stay in California - not now that others were after the artifact. No, he would have to take it far away, to a place where its secrets could be studied in safety.
And so began a journey that would span continents and decades, from the wild frontier of the Gold Rush to the foggy streets of Victorian London. For Alistair Blackwood had stumbled upon something much greater than mere treasure; he held in his hands the key to unlocking the very fabric of reality itself.
But little did he know that the true mystery was only just beginning...
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. The story should not end well for either of them.
### ASSISTANT
Chapter 1
The cold bit into my bones as I trudged through the snow, each step a labored effort. My breath came out in ragged puffs, clouding the air around me. In my arms, I cradled the weight of my world - my commander, Hauptmann Müller, his body limp and heavy with pain. His eyes were closed, but I could hear the shallow rasp of each breath he took, a testament to his will to live.
The forest around us was a maze of shadows and death, the trees standing like skeletal sentinels against the moonlit sky. The silence was deafening, broken only by the distant rumble of artillery and the occasional crack of a rifle shot. It was a cruel irony that the same snow that muffled our footsteps also betrayed the presence of the enemy, their tracks glowing like phosphorescent snakes in the pale light.
I stumbled, my legs threatening to give way under the strain. Hauptmann Müller groaned, his hand clutching at my coat. "Keep moving, Leutnant," he whispered, his voice barely audible. "We must reach the defensive line."
I nodded, gritting my teeth as I adjusted my grip on him. The pain in my arms and shoulders was a constant companion now, but it was nothing compared to the fear that gnawed at my gut. We were alone, cut off from our unit during the chaos of the American assault. The bunker we had been holed up in was now a tomb for the men who had trusted us to lead them.
A flash of memory seared through my mind, taking me back to that fateful moment when it all went wrong. The thunderous roar of explosions, the earth shaking beneath our feet as if the very foundations of hell were breaking loose. The screams of men, the acrid smell of smoke and cordite, the taste of blood in my mouth.
I remembered the look on Müller's face when he realized what was happening - a mixture of disbelief and determination. He had barked orders, trying to rally our troops, but it was too late. The Americans were upon us, their numbers overwhelming. I saw him take a bullet to the leg, his face contorting in agony as he fell. And then there was nothing but chaos and darkness.
When I came to, the bunker was silent except for the moans of the dying. The air was thick with smoke, making it hard to breathe. I found Müller slumped against a wall, his uniform soaked in blood. He looked up at me, his eyes glassy with pain but still burning with that same fierce determination. "We have to go," he said, gripping my arm. "They're coming."
And so we had fled into the night, leaving behind the wreckage of our command post and the bodies of our comrades. Now, hours later, we were still stumbling through the forest, searching for safety that seemed to recede with every step.
I could feel my strength ebbing away, my legs turning to lead. But I couldn't stop, not now. Not when we were so close. The deep defensive line was our only hope, a thin red line that stood between us and annihilation. If we could just make it there...
But fate is a cruel mistress, and she had other plans for us. As we staggered out of the trees and into a small clearing, I saw them - a patrol of American soldiers, their silhouettes stark against the snow. They spotted us at the same time, their rifles swinging around to point in our direction.
"Halt!" one of them yelled, his voice harsh and commanding. "Hands up!"
I froze, my heart pounding in my chest. Müller's grip on me tightened, his breath coming in short gasps. I knew what he was thinking - we couldn't surrender, not like this. We were too valuable, too close to the front lines. They would shoot us or take us prisoner, and either way, it would be a death sentence for him.
"Run," he whispered, his voice barely audible. "Leave me."
I shook my head, tears stinging my eyes. I couldn't do that. I wouldn't. He was my commander, my friend. I owed him everything.
But before I could make a move, the air was filled with the sound of gunfire. Müller jerked in my arms, a wet gurgle escaping his lips as he slumped against me. I stared down at him in disbelief, feeling the warmth of his blood soaking through my coat.
"Nein," I whispered, my voice choked with grief and rage. "Nein!"
I fell to my knees, cradling his lifeless body as the world around me spun out of control. The Americans were closing in, their shouts growing louder. I knew it was over - for both of us.
With a final, desperate cry, I raised my pistol and fired into the darkness, hoping to take at least one of them with me. But my aim was wild, my strength gone. The last thing I saw was a flash of light, followed by an explosion of pain in my chest. And then there was nothing but blackness.
As I slipped away, I thought of home - of my mother's warm embrace, of the laughter of my little sister. And I knew that I would never see them again. This frozen hellhole would be my grave, and Hauptmann Müller's sacrifice would be forgotten by all but me.
But in that moment, as death claimed me, I found a strange sort of peace. We had fought for our Fatherland until the very end, and we had done so with honor. And in the grand tapestry of history, perhaps that was all that mattered.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be *very* dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform.
### ASSISTANT
In the dimly lit corner of a nondescript Mexican cantina, Anton Chigurh sat motionless, his eyes scanning the room like a predator assessing its prey. The air was thick with the smell of stale cigarette smoke and cheap tequila, but he remained unfazed, his senses heightened, attuned to every sound, every movement around him. He nursed a glass of water, the condensation beading on its surface like droplets of sweat on cold steel.
His mind wandered, as it often did, to the concept of chance. The randomness of life and death fascinated him, the idea that one's fate could hinge on something as simple as a coin flip or the roll of dice. He believed in the purity of such decisions, untainted by human emotion or prejudice. It was a philosophy he had embraced wholeheartedly, and it had served him well in his line of work.
A fly buzzed around his head, its tiny wings beating an erratic rhythm against the stillness of the room. He watched it with mild interest, wondering if its life would end here, on this very spot, or if it would continue its meaningless existence elsewhere. The thought brought a small, cruel smile to his lips.
The bartender, a grizzled old man with a face like leather, eyed him warily from across the room. He knew better than to engage in conversation with this stranger, who had entered his establishment an hour ago and hadn't spoken a word since. There was something about him that made the hairs on the back of his neck stand up, something cold and deadly lurking just beneath the surface.
Chigurh's thoughts were interrupted by the sound of the door creaking open. A man in a cheap suit entered, looking around nervously before making his way towards him. He slid into the seat across from Chigurh, who continued to stare straight ahead, as if he hadn't noticed the new arrival.
"You're late," Chigurh said finally, his voice low and gravelly.
The man in the suit swallowed hard, adjusting his tie. "I apologize. Traffic."
Chigurh didn't respond, his eyes still fixed on some distant point. The man fidgeted, reaching into his pocket to retrieve a small envelope. He slid it across the table, and Chigurh picked it up without looking at him, tearing it open with a single, precise movement.
Inside was a photograph of a man, along with an address scrawled on a piece of paper. He studied the image for a moment, committing every detail to memory before slipping it back into the envelope and sliding it into his own pocket.
"You know what needs to be done," the man in the suit said, his voice barely above a whisper.
Chigurh nodded once, his mind already turning to the task at hand. He stood up, leaving a few crumpled bills on the table for the water he hadn't touched. As he walked towards the door, he could feel the weight of the grenade in his other pocket, its cool metal surface pressing against his thigh like a lover's caress.
Outside, the heat of the day was oppressive, but Chigurh barely registered it as he climbed into his car and started the engine. He drove through the dusty streets, his thoughts consumed by the upcoming job. The man in the photograph had a wife, two kids. It didn't matter. They were all just pieces on a board, pawns to be moved or removed as necessary.
He arrived at the address just as the sun was beginning to set, casting long shadows across the dilapidated building. He sat in his car for several minutes, watching the comings and goings of the neighborhood, waiting for the right moment. When it came, he slipped out of the vehicle and approached the door with a sense of purpose.
Inside, he could hear laughter and music, the sounds of a family enjoying their evening together. He smiled to himself as he pulled the grenade from his pocket, feeling its weight in his hand. This was what he lived for: the moment when chance decided who would live and who would die.
He took a deep breath, savoring the anticipation, before pulling the pin with a satisfying click. He tossed it into the room, not bothering to wait for the explosion as he turned and walked away. The screams that followed were like music to his ears, a symphony of chaos and destruction.
As he drove off into the night, Chigurh couldn't help but feel a sense of satisfaction. Another job well done, another life ended by the whims of fate. He reached into his pocket and pulled out a coin, flipping it idly between his fingers as he considered his next move.
Heads or tails? Life or death? It was all the same to him. The world was a cruel place, and he was just its instrument, meting out justice in the only way he knew how: with cold, calculated precision. And if some people happened to get caught in the crossfire... well, that was just the luck of the draw.
</details>
Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)!
|
jegilj/Practica3
|
jegilj
| 2024-05-11T10:48:56Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-05-11T10:05:37Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
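While filling out the card, a minimal sketch of how to load this model back from the Hub may be useful. It assumes the learner was exported and pushed with `push_to_hub_fastai`; the input type for prediction depends on how the model was trained, so the final line is only a placeholder.

```python
# A minimal sketch, assuming this repo was pushed with push_to_hub_fastai
# and that fastai and huggingface_hub are installed.
from huggingface_hub import from_pretrained_fastai

# Download the learner files from the Hub and rebuild the fastai Learner.
learner = from_pretrained_fastai("jegilj/Practica3")

# Hypothetical usage: `sample` must match the input type the learner was
# trained on (an image path, a text string, a table row, ...).
# prediction = learner.predict(sample)
```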
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
yongyi169/yy-chat-ar-20240511
|
yongyi169
| 2024-05-11T10:45:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T10:33:41Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
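Until the authors fill this in, the following is a minimal, hypothetical sketch. The repo's tags only indicate a `transformers` model with `safetensors` weights trained with Unsloth, so the assumption that it is a causal chat LM is the editor's, not the authors'.

```python
# A hedged sketch — assumes this is a causal language model; the actual
# model type is not documented in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yongyi169/yy-chat-ar-20240511"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```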
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hashmalmellow/Test1
|
hashmalmellow
| 2024-05-11T10:40:03Z | 0 | 0 | null |
[
"en",
"arxiv:1910.09700",
"region:us"
] | null | 2024-05-11T10:27:12Z |
---
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
This model matches students with internship opportunities aligned with their skills.
### Model Description
SkillMatch is a machine learning model designed to match students with internship opportunities based on their skills, experiences, and qualifications. The model takes as input the resumes of students and the job descriptions of internship listings and outputs a measure of similarity or relevance between the two.
<!-- Provide a longer summary of what this model is. -->
**Model Architecture:**
SkillMatch utilizes a hybrid approach combining natural language processing (NLP) techniques with machine learning algorithms. The model consists of the following components:
1. **Text Embedding Layer:** Both student resumes and internship job descriptions are processed through a text embedding layer to convert the textual data into numerical representations. This layer may use techniques like word embeddings (e.g., Word2Vec, GloVe) or contextual embeddings (e.g., BERT, GPT) to capture the semantic meaning of words and phrases.
2. **Feature Extraction:** The numerical representations obtained from the text embedding layer are then fed into a feature extraction module. This module extracts relevant features such as skills, experiences, education, and qualifications from the textual data.
3. **Matching Algorithm:** A matching algorithm compares the extracted features from student resumes with those from internship job descriptions to determine the similarity or relevance between the two. Various similarity metrics can be used for this purpose, including cosine similarity, Jaccard similarity, or neural network-based similarity measures (a minimal sketch of this step appears just after this list).
4. **Ranking and Filtering:** The model ranks the internship opportunities based on their similarity scores with each student's profile. Additionally, it may apply filters based on criteria such as location, industry, duration, and required qualifications to narrow down the list of recommended opportunities.
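To make the matching step concrete, here is a minimal, illustrative sketch. It substitutes TF-IDF vectors for the learned embedding layer described above — an assumption for brevity, since the card does not name the actual embedding model — and ranks listings by cosine similarity:

```python
# Illustrative only: TF-IDF stands in for the learned embedding layer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resume = ["Python, machine learning and data analysis; internship experience"]
job_descriptions = [
    "Seeking intern with Python and data analysis skills",
    "Marketing internship, social media experience required",
]

# Fit one vocabulary over all documents so the vectors are comparable.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(resume + job_descriptions)

# Row 0 is the resume; the remaining rows are the internship listings.
scores = cosine_similarity(matrix[0:1], matrix[1:])[0]
ranked = sorted(enumerate(scores), key=lambda pair: pair[1], reverse=True)
print(ranked)  # highest-scoring listing indices first
```

In the full system the TF-IDF step would be replaced by the word or contextual embeddings (e.g., BERT) named in the architecture, but the ranking logic is the same.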
**Training Data:**
SkillMatch is trained on a large dataset consisting of labeled pairs of student resumes and internship job descriptions. The dataset is manually curated and annotated to indicate which internship opportunities are suitable for each student based on their skills and qualifications.
**Evaluation Metrics:**
The performance of SkillMatch is evaluated using metrics such as precision, recall, and F1-score, which measure the model's ability to accurately recommend relevant internship opportunities to students. Additionally, user feedback and satisfaction surveys may be collected to assess the model's effectiveness in practice.
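As a small illustration of these metrics (with made-up labels, since no evaluation data is published here):

```python
# Made-up example labels: 1 = internship judged relevant for the student.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0]  # annotated relevance
y_pred = [1, 0, 0, 1, 1]  # model's recommendations
print(precision_score(y_true, y_pred))  # 0.666...
print(recall_score(y_true, y_pred))     # 0.666...
print(f1_score(y_true, y_pred))         # 0.666...
```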
**Deployment:**
Once trained and evaluated, SkillMatch can be deployed as part of a web application or API where students can upload their resumes and receive personalized recommendations for internship opportunities aligned with their skills and interests. The model can be updated periodically with new data to improve its performance over time.
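A deployment along these lines could look like the following sketch. Nothing here ships with the repo; `rank_listings` is a hypothetical stand-in for the matching pipeline sketched earlier.

```python
# An illustrative sketch only — no serving code is published with this model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

def rank_listings(resume_text: str) -> list[str]:
    # Hypothetical placeholder: a real deployment would embed the resume
    # and score it against stored internship descriptions.
    return ["listing-001", "listing-002"]

class Resume(BaseModel):
    text: str

@app.post("/recommendations")
def recommendations(resume: Resume):
    return {"matches": rank_listings(resume.text)}
```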
- **Developed by:** [NsZ]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abc88767/22c2
|
abc88767
| 2024-05-11T10:39:32Z | 121 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-11T10:38:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/dolphin-ultrafeedback-dpo-GGUF
|
mradermacher
| 2024-05-11T10:39:12Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ibivibiv/dolphin-ultrafeedback-dpo",
"base_model:quantized:ibivibiv/dolphin-ultrafeedback-dpo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-11T08:38:18Z |
---
base_model: ibivibiv/dolphin-ultrafeedback-dpo
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ibivibiv/dolphin-ultrafeedback-dpo
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
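As a minimal, hedged sketch (not from the original card), one common way to run one of the quants below is with `llama-cpp-python`; the filename matches the Q4_K_M entry in the table.

```python
# Sketch: download one quant and run it locally with llama-cpp-python.
# pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/dolphin-ultrafeedback-dpo-GGUF",
    filename="dolphin-ultrafeedback-dpo.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```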
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/dolphin-ultrafeedback-dpo-GGUF/resolve/main/dolphin-ultrafeedback-dpo.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
haninsuleiman/NABDFinalModel
|
haninsuleiman
| 2024-05-11T10:38:48Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"code",
"en",
"license:mit",
"region:us"
] | null | 2024-05-11T10:32:22Z |
---
license: mit
language:
- en
library_name: adapter-transformers
tags:
- code
---
|
minnehwg/inetuned-summarize-test
|
minnehwg
| 2024-05-11T10:29:05Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-10T17:04:53Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: inetuned-summarize-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# inetuned-summarize-test
This model is a fine-tuned version of [minnehwg/inetuned-summarize-test](https://huggingface.co/minnehwg/inetuned-summarize-test) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8961
- Rouge1: 34.2527
- Rouge2: 14.2278
- Rougel: 24.7229
- Rougelsum: 25.3930
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log | 1.0 | 250 | 0.9249 | 34.2996 | 13.7194 | 24.5150 | 25.1146 |
| 0.6471 | 2.0 | 500 | 0.9104 | 34.1716 | 14.0356 | 24.6683 | 25.1958 |
| 0.6471 | 3.0 | 750 | 0.8961 | 34.2527 | 14.2278 | 24.7229 | 25.3930 |
### Framework versions
- Transformers 4.17.0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
RavSmarty/mistral-finetuned-samsum
|
RavSmarty
| 2024-05-11T10:25:59Z | 6 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-11T09:52:04Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ayushgs/llama-3-8b-quickreel
|
ayushgs
| 2024-05-11T10:20:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T10:20:10Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** ayushgs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-8bits
|
RichardErkhov
| 2024-05-11T10:05:14Z | 77 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2402.17733",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-11T09:13:11Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TowerBase-7B-v0.1 - bnb 8bits
- Model creator: https://huggingface.co/Unbabel/
- Original model: https://huggingface.co/Unbabel/TowerBase-7B-v0.1/
Original model description:
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
model-index:
- name: TowerBase-7B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.29
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 13.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
---
# Model Card for TowerBase-7B-v0.1
## Model Details
### Model Description
TowerBase-7B is a language model that results from continuing the pretraining of Llama 2 on a mix of 20 billion tokens of monolingual data in ten different languages — English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian — and bilingual data. TowerBase-7B-v0.1 is the first model in the series.
The resulting model shows improved performance on the supported languages, while maintaining Llama 2's capabilities on English. It is particularly well-suited for fine-tuning on translation and related tasks: check out [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1).
We will release more details in the upcoming technical report.
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model built on top of Llama 2 by continuing pretraining on multilingual data.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Intended uses & limitations
The model is intended for research purposes in the 10 languages it supports.
The model is able to perform well on translation and related tasks (e.g., APE, GEC) on a few-shot regime.
It can also be fine-tuned to perform these tasks in a zero-shot fashion (see [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1)), as well as other multilingual tasks.
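As an illustrative sketch of the few-shot regime mentioned above (the example pairs are invented, and the prompt format simply extends the one used in the run example below):

```python
# Invented few-shot translation prompt in the same "English:/Portuguese:" style
# as the run example below; pass it to model.generate as usual.
few_shot_prompt = (
    "English: The book is on the table.\nPortuguese: O livro está sobre a mesa.\n"
    "English: I like coffee.\nPortuguese: Eu gosto de café.\n"
    "English: My name is TowerBase.\nPortuguese:"
)
```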
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports.
## Bias, Risks, and Limitations
TowerBase-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Unbabel/TowerBase-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "English: My name is TowerBase.\nPortuguese:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training Data
Filtered versions of [mc4](https://huggingface.co/datasets/mc4) and bilingual data from various sources (e.g., [OPUS](https://opus.nlpl.eu/)).
## Citation
```bibtex
@misc{tower_llm_2024,
title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
year={2024},
eprint={2402.17733},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
beimu/QA-test-llama3-7b
|
beimu
| 2024-05-11T10:04:46Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-11T09:34:07Z |
---
license: apache-2.0
---
|
bartowski/dolphin-2.9-llama3-70b-GGUF
|
bartowski
| 2024-05-11T10:04:44Z | 132 | 1 | null |
[
"gguf",
"text-generation",
"en",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:abacusai/SystemChat-1.1",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"license:llama3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2024-05-11T07:08:48Z |
---
license: llama3
language:
- en
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- HuggingFaceH4/ultrachat_200k
- microsoft/orca-math-word-problems-200k
- abacusai/SystemChat-1.1
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of dolphin-2.9-llama3-70b
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2828">b2828</a> for quantization.
Original model: https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-70b
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [dolphin-2.9-llama3-70b-Q8_0.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/tree/main/dolphin-2.9-llama3-70b-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. |
| [dolphin-2.9-llama3-70b-Q6_K.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/tree/main/dolphin-2.9-llama3-70b-Q6_K.gguf) | Q6_K | 57.88GB | Very high quality, near perfect, *recommended*. |
| [dolphin-2.9-llama3-70b-Q5_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. |
| [dolphin-2.9-llama3-70b-Q5_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q5_K_S.gguf) | Q5_K_S | 48.65GB | High quality, *recommended*. |
| [dolphin-2.9-llama3-70b-Q4_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [dolphin-2.9-llama3-70b-Q4_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q4_K_S.gguf) | Q4_K_S | 40.34GB | Slightly lower quality with more space savings, *recommended*. |
| [dolphin-2.9-llama3-70b-IQ4_NL.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ4_NL.gguf) | IQ4_NL | 40.05GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [dolphin-2.9-llama3-70b-IQ4_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [dolphin-2.9-llama3-70b-Q3_K_L.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q3_K_L.gguf) | Q3_K_L | 37.14GB | Lower quality but usable, good for low RAM availability. |
| [dolphin-2.9-llama3-70b-Q3_K_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. |
| [dolphin-2.9-llama3-70b-IQ3_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [dolphin-2.9-llama3-70b-IQ3_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ3_S.gguf) | IQ3_S | 30.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [dolphin-2.9-llama3-70b-Q3_K_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. |
| [dolphin-2.9-llama3-70b-IQ3_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ3_XS.gguf) | IQ3_XS | 29.30GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [dolphin-2.9-llama3-70b-IQ3_XXS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [dolphin-2.9-llama3-70b-Q2_K.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. |
| [dolphin-2.9-llama3-70b-IQ2_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [dolphin-2.9-llama3-70b-IQ2_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ2_S.gguf) | IQ2_S | 22.24GB | Very low quality, uses SOTA techniques to be usable. |
| [dolphin-2.9-llama3-70b-IQ2_XS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ2_XS.gguf) | IQ2_XS | 21.14GB | Very low quality, uses SOTA techniques to be usable. |
| [dolphin-2.9-llama3-70b-IQ2_XXS.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ2_XXS.gguf) | IQ2_XXS | 19.09GB | Lower quality, uses SOTA techniques to be usable. |
| [dolphin-2.9-llama3-70b-IQ1_M.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. |
| [dolphin-2.9-llama3-70b-IQ1_S.gguf](https://huggingface.co/bartowski/dolphin-2.9-llama3-70b-GGUF/blob/main/dolphin-2.9-llama3-70b-IQ1_S.gguf) | IQ1_S | 15.34GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/dolphin-2.9-llama3-70b-GGUF --include "dolphin-2.9-llama3-70b-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/dolphin-2.9-llama3-70b-GGUF --include "dolphin-2.9-llama3-70b-Q8_0.gguf/*" --local-dir dolphin-2.9-llama3-70b-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (dolphin-2.9-llama3-70b-Q8_0) or download them all in place (./)
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
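As an illustrative helper (not part of the original card), that rule of thumb can be mechanized; the sizes below are copied from the table above, and the 2 GB headroom is an assumption at the conservative end of the 1-2GB guidance.

```python
# Pick the largest quant that leaves some headroom below your memory budget.
QUANT_SIZES_GB = {  # file sizes (GB) from the table above
    "Q8_0": 74.97, "Q6_K": 57.88, "Q5_K_M": 49.94, "Q4_K_M": 42.52,
    "IQ4_XS": 37.90, "Q3_K_M": 34.26, "IQ3_M": 31.93, "Q2_K": 26.37,
    "IQ2_M": 24.11, "IQ1_S": 15.34,
}

def pick_quant(total_memory_gb: float, headroom_gb: float = 2.0) -> str | None:
    budget = total_memory_gb - headroom_gb
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(48.0))  # a 48 GB budget selects Q4_K_M (42.52 GB)
```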
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also targets AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
yifanxie/noisy-partridge-1
|
yifanxie
| 2024-05-11T10:04:23Z | 138 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-05-11T10:02:33Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [google/gemma-2b](https://huggingface.co/google/gemma-2b)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have it installed.
```bash
pip install transformers==4.40.1
```
Also make sure you are providing your huggingface token to the pipeline if the model lives in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="yifanxie/noisy-partridge-1",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<eos><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"yifanxie/noisy-partridge-1",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"yifanxie/noisy-partridge-1",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 256
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yifanxie/noisy-partridge-1" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<eos><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 256
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
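A minimal sketch of combining both options (this snippet is illustrative and assumes `bitsandbytes` and `accelerate` are installed):

```python
# Sketch: 8-bit quantized loading, sharded across available GPUs.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yifanxie/noisy-partridge-1")
model = AutoModelForCausalLM.from_pretrained(
    "yifanxie/noisy-partridge-1",
    load_in_8bit=True,   # or load_in_4bit=True
    device_map="auto",   # shard layers across all visible GPUs
    trust_remote_code=True,
)
```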
## Model Architecture
```
GemmaForCausalLM(
(model): GemmaModel(
(embed_tokens): Embedding(256000, 2048, padding_idx=0)
(layers): ModuleList(
(0-17): 18 x GemmaDecoderLayer(
(self_attn): GemmaSdpaAttention(
(q_proj): Linear(in_features=2048, out_features=2048, bias=False)
(k_proj): Linear(in_features=2048, out_features=256, bias=False)
(v_proj): Linear(in_features=2048, out_features=256, bias=False)
(o_proj): Linear(in_features=2048, out_features=2048, bias=False)
(rotary_emb): GemmaRotaryEmbedding()
)
(mlp): GemmaMLP(
(gate_proj): Linear(in_features=2048, out_features=16384, bias=False)
(up_proj): Linear(in_features=2048, out_features=16384, bias=False)
(down_proj): Linear(in_features=16384, out_features=2048, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): GemmaRMSNorm()
(post_attention_layernorm): GemmaRMSNorm()
)
)
(norm): GemmaRMSNorm()
)
(lm_head): Linear(in_features=2048, out_features=256000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
Retr0REG/Whats-up-gguf
|
Retr0REG
| 2024-05-11T10:02:04Z | 109 | 2 |
transformers
|
[
"transformers",
"gguf",
"llama",
"llama-2",
"text-generation",
"code",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-05-02T11:34:19Z |
---
language:
- code
license: llama2
tags:
- llama-2
inference: true
model_creator: Meta
model_type: llama
pipeline_tag: text-generation
---
PoC Model For `llama-cpp-python` Metadata `Server-Side Template Injection` -> `RCE`
+ Writeup: https://0reg.dev/blog/from-gguf-model-format-metadata-rce-to-state-of-the-art-nlp-project-rces
+ GitHub Advisory: https://github.com/abetlen/llama-cpp-python/security/advisories/GHSA-56xg-wfcc-g829
|
Pubudu/mbart-large-cc25_prefix_tuning_12_par_bn_rf_4_hiru_structured_first3
|
Pubudu
| 2024-05-11T10:01:04Z | 4 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"mbart",
"adapterhub:summarization/hiru_structured_5100_first3",
"dataset:hiru_structutred_5100_first3",
"region:us"
] | null | 2024-05-11T08:03:26Z |
---
tags:
- mbart
- adapterhub:summarization/hiru_structured_5100_first3
- adapter-transformers
datasets:
- hiru_structutred_5100_first3
---
# Adapter `Pubudu/mbart-large-cc25_prefix_tuning_12_par_bn_rf_4_hiru_structured_first3` for facebook/mbart-large-cc25
An [adapter](https://adapterhub.ml) for the `facebook/mbart-large-cc25` model that was trained on the [summarization/hiru_structured_5100_first3](https://adapterhub.ml/explore/summarization/hiru_structured_5100_first3/) dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("facebook/mbart-large-cc25")
adapter_name = model.load_adapter("Pubudu/mbart-large-cc25_prefix_tuning_12_par_bn_rf_4_hiru_structured_first3", source="hf", set_active=True)
```
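From here, a short usage sketch (continuing from the snippet above; the input text and generation settings are invented, and depending on the language you may also need to set the tokenizer's `src_lang`):

```python
# Illustrative summarization call with the activated adapter.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
inputs = tokenizer("Your article text here ...", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```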
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
WhiteGiverPlus/mistral-deepseek-chat7b
|
WhiteGiverPlus
| 2024-05-11T10:00:38Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-11T09:54:20Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Pubudu/test
|
Pubudu
| 2024-05-11T09:57:16Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:facebook/mbart-large-cc25",
"base_model:finetune:facebook/mbart-large-cc25",
"region:us"
] | null | 2024-04-23T05:21:27Z |
---
base_model: facebook/mbart-large-cc25
tags:
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1215
- Gen Len: 17.3533
- Rouge-1: 39.1861
- Rouge-2: 22.0975
- Rouge-l: 38.4014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 1000
- num_epochs: 50
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Rouge-1 | Rouge-2 | Rouge-l |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|
| No log | 1.0 | 642 | 3.3523 | 22.5622 | 28.8044 | 14.6813 | 28.1959 |
| No log | 2.0 | 1284 | 2.9887 | 22.7422 | 36.6353 | 19.681 | 35.9403 |
| No log | 3.0 | 1926 | 2.9367 | 20.0578 | 38.6433 | 21.0943 | 37.9327 |
| No log | 4.0 | 2568 | 2.9503 | 18.5644 | 38.6509 | 21.3031 | 37.8452 |
| No log | 5.0 | 3210 | 2.9366 | 17.1689 | 38.8973 | 21.9518 | 38.3012 |
| No log | 6.0 | 3852 | 2.9782 | 19.2489 | 39.5578 | 22.3324 | 38.9385 |
| No log | 7.0 | 4494 | 3.0080 | 19.0422 | 38.1388 | 21.5059 | 37.4054 |
| 2.8286 | 8.0 | 5136 | 3.0908 | 18.4667 | 38.7921 | 21.3614 | 38.0183 |
| 2.8286 | 9.0 | 5778 | 3.1191 | 18.2978 | 39.3199 | 22.3807 | 38.6943 |
| 2.8286 | 10.0 | 6420 | 3.1215 | 17.3533 | 39.1861 | 22.0975 | 38.4014 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
nickrwu/test-roberta-finetuned-mathqa
|
nickrwu
| 2024-05-11T09:56:34Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:LIAMF-USP/roberta-large-finetuned-race",
"base_model:finetune:LIAMF-USP/roberta-large-finetuned-race",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-05-11T09:56:01Z |
---
license: mit
base_model: LIAMF-USP/roberta-large-finetuned-race
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: test-roberta-finetuned-mathqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-roberta-finetuned-mathqa
This model is a fine-tuned version of [LIAMF-USP/roberta-large-finetuned-race](https://huggingface.co/LIAMF-USP/roberta-large-finetuned-race) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6094
- Accuracy: 0.2007
- F1: 0.1089
- Precision: 0.1782
- Recall: 0.1954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6207 | 1.0 | 2970 | 1.6094 | 0.2064 | 0.0714 | 0.1694 | 0.2010 |
| 1.6136 | 2.0 | 5940 | 1.6094 | 0.2064 | 0.0951 | 0.1934 | 0.2020 |
| 1.6161 | 3.0 | 8910 | 1.6094 | 0.2007 | 0.1089 | 0.1782 | 0.1954 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mmnga/japanese-stablelm-2-instruct-1_6b-gguf
|
mmnga
| 2024-05-11T09:56:19Z | 1,091 | 2 | null |
[
"gguf",
"japanese-stablelm",
"causal-lm",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-05-11T07:26:43Z |
---
extra_gated_prompt: >-
By clicking "Agree", you agree to the [License Agreement](https://huggingface.co/stabilityai/japanese-stablelm-2-instruct-1_6b/blob/main/LICENSE.txt) and acknowledge Stability AI's [Privacy Policy](https://stability.ai/privacy-policy).
license:
- other
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
tags:
- japanese-stablelm
- causal-lm
---
# japanese-stablelm-2-instruct-1_6b-gguf
This is a gguf-format conversion of [japanese-stablelm-2-instruct-1_6b published by stabilityai](https://huggingface.co/stabilityai/japanese-stablelm-2-instruct-1_6b).
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## license
Please be sure to read the [terms of use](https://huggingface.co/stabilityai/japanese-stablelm-2-instruct-1_6b/blob/main/LICENSE.txt) before using this model, and use it only if you agree to them.
By using this model, you are deemed to have agreed to those terms of use.
Please note: for commercial use, registration for a [membership](https://stability.ai/membership) is required.
## convert
After downloading the original model, you need to modify tokenization_arcade100k.py.
Add the following at the end of `def __init__`:
```
self.special_tokens = self.tokenizer._special_tokens
```
The conversion script is available [here](https://gist.github.com/mmnga/bd9de075fcbdf1f95587edeb35565419).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'japanese-stablelm-2-instruct-1_6b-Q4_0.gguf' -n 128 -p 'こんにちわ'
```
|
mnoukhov/pythia410m-dpo-tldr-lr1e-5
|
mnoukhov
| 2024-05-11T09:55:58Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mnoukhov/pythia410m-sft-tldr",
"base_model:adapter:mnoukhov/pythia410m-sft-tldr",
"license:apache-2.0",
"region:us"
] | null | 2024-05-11T05:47:20Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mnoukhov/pythia410m-sft-tldr
model-index:
- name: pythia410m-dpo-tldr-lr1e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia410m-dpo-tldr-lr1e-5
This model is a fine-tuned version of [mnoukhov/pythia410m-sft-tldr](https://huggingface.co/mnoukhov/pythia410m-sft-tldr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5595
- Rewards/chosen: -0.9059
- Rewards/rejected: -1.3735
- Rewards/accuracies: 0.7113
- Rewards/margins: 0.4677
- Logps/rejected: -88.3830
- Logps/chosen: -88.3830
- Logps/ref Rejected: -63.5119
- Logps/ref Chosen: -70.2656
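For context, these reward columns follow the standard DPO definition (a sketch, assuming TRL's usual formulation; β is not stated in the card, but β = 0.05 reproduces the numbers above):

$$ r(x, y) = \beta\left(\log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x)\right) $$

For the chosen responses, 0.05 × (-88.3830 - (-70.2656)) ≈ -0.9059, and the margin is the chosen reward minus the rejected reward: -0.9059 - (-1.3735) ≈ 0.4677.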
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logps/ref Rejected | Logps/ref Chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:------------------:|:----------------:|
| 0.6295 | 0.2 | 291 | 0.5864 | -0.5101 | -0.8319 | 0.7039 | 0.3218 | -80.4685 | -80.4685 | -63.5119 | -70.2656 |
| 0.5926 | 0.4 | 582 | 0.5600 | -0.9009 | -1.3738 | 0.7120 | 0.4728 | -88.2839 | -88.2839 | -63.5119 | -70.2656 |
| 0.5761 | 0.6 | 873 | 0.5585 | -0.9509 | -1.4326 | 0.7110 | 0.4817 | -89.2846 | -89.2846 | -63.5119 | -70.2656 |
| 0.5678 | 0.8 | 1164 | 0.5595 | -0.9059 | -1.3735 | 0.7113 | 0.4677 | -88.3830 | -88.3830 | -63.5119 | -70.2656 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
RichardErkhov/georgesung_-_llama2_7b_chat_uncensored-8bits
|
RichardErkhov
| 2024-05-11T09:53:40Z | 79 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-11T09:44:12Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama2_7b_chat_uncensored - bnb 8bits
- Model creator: https://huggingface.co/georgesung/
- Original model: https://huggingface.co/georgesung/llama2_7b_chat_uncensored/
Original model description:
---
license: other
datasets:
- georgesung/wizard_vicuna_70k_unfiltered
---
# Overview
Fine-tuned [Llama-2 7B](https://huggingface.co/TheBloke/Llama-2-7B-fp16) with an uncensored/unfiltered Wizard-Vicuna conversation dataset (originally from [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)).
QLoRA was used for fine-tuning. Training ran for one epoch on a 24GB GPU (NVIDIA A10G) instance and took ~19 hours.
The version here is the fp16 HuggingFace model.
## GGML & GPTQ versions
Thanks to [TheBloke](https://huggingface.co/TheBloke) for creating the GGML and GPTQ versions:
* https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML
* https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GPTQ
# Prompt style
The model was trained with the following prompt style:
```
### HUMAN:
Hello
### RESPONSE:
Hi, how are you?
### HUMAN:
I'm fine.
### RESPONSE:
How can I help you?
...
```
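A minimal generation sketch using this format with 🤗 Transformers (the question and generation settings are illustrative; `device_map="auto"` assumes `accelerate` is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "georgesung/llama2_7b_chat_uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a prompt in the "### HUMAN:" / "### RESPONSE:" format shown above
prompt = "### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```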
# Training code
Code used to train the model is available [here](https://github.com/georgesung/llm_qlora).
To reproduce the results:
```
git clone https://github.com/georgesung/llm_qlora
cd llm_qlora
pip install -r requirements.txt
python train.py configs/llama2_7b_chat_uncensored.yaml
```
# Fine-tuning guide
https://georgesung.github.io/ai/qlora-ift/
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_georgesung__llama2_7b_chat_uncensored)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 43.39 |
| ARC (25-shot) | 53.58 |
| HellaSwag (10-shot) | 78.66 |
| MMLU (5-shot) | 44.49 |
| TruthfulQA (0-shot) | 41.34 |
| Winogrande (5-shot) | 74.11 |
| GSM8K (5-shot) | 5.84 |
| DROP (3-shot) | 5.69 |
|
arwisyah/fine-tuned-cardiffnlp-twitter-roberta-base-sentiment-finance-dataset
|
arwisyah
| 2024-05-11T09:48:15Z | 111 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"finance",
"en",
"dataset:CJCJ3030/twitter-financial-news-sentiment",
"base_model:cardiffnlp/twitter-roberta-base-sentiment",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-10T21:25:43Z |
---
tags:
- generated_from_trainer
- finance
base_model: cardiffnlp/twitter-roberta-base-sentiment
metrics:
- accuracy
model-index:
- name: fine-tuned-cardiffnlp-twitter-roberta-base-sentiment-finance-dataset
results: []
datasets:
- CJCJ3030/twitter-financial-news-sentiment
language:
- en
library_name: transformers
pipeline_tag: text-classification
widget:
- text: "UK house sales up 12% in April"
- text: "Singapore oil trader convicted of abetting forgery and cheating HSBC"
- text: "‘There’s money everywhere’: Milken conference-goers look for a dealmaking revival"
- text: "ETF buying nearly halves in April as US rate cut hopes recede"
- text: "Todd Boehly’s investment house in advanced talks to buy private credit firm"
- text: "Berkshire Hathaway’s cash pile hits new record as Buffett dumps stocks"
- text: "Harvest partnership to bring HK-listed crypto ETFs to Singapore"
- text: "Kazakh oligarch Timur Kulibayev sells Mayfair mansion for £35mn"
- text: "Deutsche Bank’s DWS inflated client asset inflows by billions of euro"
- text: "UBS reports stronger than expected profit in first quarter"
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-cardiffnlp-twitter-roberta-base-sentiment-finance-dataset
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on a Twitter financial news sentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3123
- Accuracy: 0.8559
The 10 examples shown in the Inference API widget were gathered from https://twitter.com/ftfinancenews in early May 2024.
Colab notebook for fine-tuning: https://colab.research.google.com/drive/1gvpFbazlxg3AdSldH3w6TYjGUByxqCrh?usp=sharing
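A minimal usage sketch with the `transformers` pipeline (the returned label names depend on the `id2label` mapping inherited from the base model, e.g. `LABEL_0`/`LABEL_1`/`LABEL_2` unless remapped):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="arwisyah/fine-tuned-cardiffnlp-twitter-roberta-base-sentiment-finance-dataset",
)
# One of the example headlines from this card
print(classifier("Berkshire Hathaway's cash pile hits new record as Buffett dumps stocks"))
```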
### Training Data
https://huggingface.co/datasets/CJCJ3030/twitter-financial-news-sentiment/viewer/default/train
### Evaluation Data
https://huggingface.co/datasets/CJCJ3030/twitter-financial-news-sentiment/viewer/default/validation
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 120
- eval_batch_size: 120
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Epoch | Step | Validation Loss | Accuracy |
|:-----:|:----:|:---------------:|:--------:|
| 1.0 | 80 | 0.3123 | 0.8559 |
| 2.0 | 160 | 0.3200 | 0.8576 |
| 3.0 | 240 | 0.3538 | 0.8819 |
| 4.0 | 320 | 0.3695 | 0.8882 |
| 5.0 | 400 | 0.4108 | 0.8869 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
## Citation
```bibtex
@inproceedings{barbieri-etal-2020-tweeteval,
title = "{T}weet{E}val: Unified Benchmark and Comparative Evaluation for Tweet Classification",
author = "Barbieri, Francesco and
Camacho-Collados, Jose and
Espinosa Anke, Luis and
Neves, Leonardo",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.148",
doi = "10.18653/v1/2020.findings-emnlp.148",
pages = "1644--1650"
}
```
|
tomaszki/stablelm-56
|
tomaszki
| 2024-05-11T09:32:12Z | 123 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-11T09:30:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Pelochus/llama2-chat-13b-hf-rk3588
|
Pelochus
| 2024-05-11T09:31:46Z | 0 | 1 | null |
[
"llama2",
"llama2-13b",
"rkllm",
"rockchip",
"rk3588",
"region:us"
] | null | 2024-04-14T08:18:51Z |
---
tags:
- llama2
- llama2-13b
- rkllm
- rockchip
- rk3588
---
# Llama 2 Chat 13B for RK3588
This is a conversion from https://huggingface.co/meta-llama/Llama-2-13b-chat-hf to the RKLLM format for Rockchip devices.
It runs on the RK3588's NPU.
**Latest update:** April 2024. Converted with **RKLLM runtime 1.0.0**. Not compatible with newer runtime versions.
# Main repo
See this for my full collection of converted LLMs for the RK3588's NPU:
https://huggingface.co/Pelochus/ezrkllm-collection
# License
Same as the original LLM:
https://huggingface.co/meta-llama/Llama-2-13b-chat-hf/blob/main/LICENSE.txt
|
Pelochus/llama2-chat-7b-hf-rk3588
|
Pelochus
| 2024-05-11T09:31:08Z | 0 | 0 | null |
[
"llama2",
"llama2-7b",
"rkllm",
"rockchip",
"rk3588",
"region:us"
] | null | 2024-04-13T15:28:36Z |
---
tags:
- llama2
- llama2-7b
- rkllm
- rockchip
- rk3588
---
# Llama 2 Chat 7B for RK3588
This is a conversion from https://huggingface.co/meta-llama/Llama-2-7b-chat-hf to the RKLLM format for Rockchip devices.
It runs on the RK3588's NPU.
**Latest update:** April 2024. Converted with **RKLLM runtime 1.0.0**. Not compatible with newer runtime versions.
# Main repo
See this for my full collection of converted LLMs for the RK3588's NPU:
https://huggingface.co/Pelochus/ezrkllm-collection
# License
Same as the original LLM:
https://huggingface.co/meta-llama/Llama-2-7b-chat-hf/blob/main/LICENSE.txt
|
Pelochus/qwen-1_8B-rk3588
|
Pelochus
| 2024-05-11T09:29:25Z | 0 | 0 | null |
[
"qwen",
"rkllm",
"rockchip",
"rk3588",
"region:us"
] | null | 2024-04-11T16:18:10Z |
---
tags:
- qwen
- rkllm
- rockchip
- rk3588
---
# Qwen Chat 1.8B for RK3588
This is a conversion from https://huggingface.co/Qwen/Qwen-1_8B/ to the RKLLM format for Rockchip devices.
It runs on the RK3588's NPU.
**Latest update:** April 2024. Converted with **RKLLM runtime 1.0.0**.
# Main repo
See this for my full collection of converted LLMs for the RK3588's NPU:
https://huggingface.co/Pelochus/ezrkllm-collection
# License
Same as the original LLM: https://huggingface.co/Qwen/Qwen-1_8B/
|
Pelochus/phi-3-mini-rk3588
|
Pelochus
| 2024-05-11T09:28:25Z | 0 | 2 | null |
[
"phi3",
"phi3-mini",
"rkllm",
"rockchip",
"rk3588",
"region:us"
] | null | 2024-05-09T17:01:14Z |
---
tags:
- phi3
- phi3-mini
- rkllm
- rockchip
- rk3588
---
# Microsoft's Phi-3 mini (4K instruct) for RK3588
This is a conversion from https://huggingface.co/microsoft/phi-3-mini-4k-instruct to the RKLLM format for Rockchip devices.
It runs on the RK3588's NPU.
**Latest update:** 9th May 2024. Converted with **RKLLM runtime 1.0.1**.
# Main repo
See this for my full collection of converted LLMs for the RK3588's NPU:
https://huggingface.co/Pelochus/ezrkllm-collection
# License
Same as the original LLM, in this case MIT:
https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
|
JasonIsHere/gemma-2b-it-nextjs
|
JasonIsHere
| 2024-05-11T09:23:52Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-11T08:44:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf
|
RichardErkhov
| 2024-05-11T09:21:30Z | 29 | 0 | null |
[
"gguf",
"arxiv:2401.01335",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-11T07:21:05Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
zephyr-7b-sft-full-SPIN-iter2 - GGUF
- Model creator: https://huggingface.co/UCLA-AGI/
- Original model: https://huggingface.co/UCLA-AGI/zephyr-7b-sft-full-SPIN-iter2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [zephyr-7b-sft-full-SPIN-iter2.Q2_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q2_K.gguf) | Q2_K | 2.53GB |
| [zephyr-7b-sft-full-SPIN-iter2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [zephyr-7b-sft-full-SPIN-iter2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [zephyr-7b-sft-full-SPIN-iter2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q3_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q3_K.gguf) | Q3_K | 3.28GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [zephyr-7b-sft-full-SPIN-iter2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q4_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q4_0.gguf) | Q4_0 | 3.83GB |
| [zephyr-7b-sft-full-SPIN-iter2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q4_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q4_K.gguf) | Q4_K | 4.07GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q4_1.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q4_1.gguf) | Q4_1 | 4.24GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q5_0.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q5_0.gguf) | Q5_0 | 4.65GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q5_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q5_K.gguf) | Q5_K | 4.78GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q5_1.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q5_1.gguf) | Q5_1 | 5.07GB |
| [zephyr-7b-sft-full-SPIN-iter2.Q6_K.gguf](https://huggingface.co/RichardErkhov/UCLA-AGI_-_zephyr-7b-sft-full-SPIN-iter2-gguf/blob/main/zephyr-7b-sft-full-SPIN-iter2.Q6_K.gguf) | Q6_K | 5.53GB |
Original model description:
---
license: mit
datasets:
- UCLA-AGI/SPIN_iter2
language:
- en
pipeline_tag: text-generation
---
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models (https://arxiv.org/abs/2401.01335)
# zephyr-7b-sft-full-spin-iter2
This model is a self-play fine-tuned model at iteration 2 from [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full), using synthetic data based on the [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset.
## Model Details
### Model Description
- Model type: A 7B parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: alignment-handbook/zephyr-7b-sft-full (based on mistralai/Mistral-7B-v0.1)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- optimizer: RMSProp
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_UCLA-AGI__test-test)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 63.54 |
| ARC (25-shot) | 66.47 |
| HellaSwag (10-shot) | 85.82 |
| MMLU (5-shot) | 61.48 |
| TruthfulQA (0-shot) | 57.75 |
| Winogrande (5-shot) | 76.95 |
| GSM8K (5-shot) | 32.75 |
## Citation
```
@misc{chen2024selfplay,
title={Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models},
author={Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu},
year={2024},
eprint={2401.01335},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
CodeTriad/mistral-instruct-finetune_1500_r_128_alpha_32
|
CodeTriad
| 2024-05-11T09:20:28Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-05-11T09:19:57Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
philgrey/question_classifier
|
philgrey
| 2024-05-11T09:16:34Z | 112 | 1 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-03T17:29:08Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: question_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question_classifier
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0991
- Accuracy: 0.9881
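Usage is not documented in this card; below is a minimal inference sketch (the example question is illustrative, and the card does not state what the output labels mean):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="philgrey/question_classifier")
print(classifier("What time does the store open?"))  # prints the predicted label and score
```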
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 16 | 0.1252 | 0.9524 |
| No log | 2.0 | 32 | 0.0991 | 0.9881 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.1.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
omaryshchenko/w2v-bert-2.0-english-CV16.1-v1-onnx
|
omaryshchenko
| 2024-05-11T09:14:57Z | 14 | 0 |
transformers
|
[
"transformers",
"onnx",
"wav2vec2-bert",
"automatic-speech-recognition",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-05-11T08:54:28Z |
---
license: cc-by-nc-4.0
---
|
luis56125/resnet18_segmentacion
|
luis56125
| 2024-05-11T09:14:37Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2024-05-11T09:14:31Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
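As a starting point, a minimal loading sketch via `huggingface_hub`'s fastai integration (assumes `fastai` and `huggingface_hub` are installed; the prediction call is illustrative):

```python
from huggingface_hub import from_pretrained_fastai

# Download the exported fastai learner from the Hub
learner = from_pretrained_fastai("luis56125/resnet18_segmentacion")
# pred = learner.predict(image)  # pass an input image as expected by the segmentation learner
```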
|
RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-4bits
|
RichardErkhov
| 2024-05-11T09:12:31Z | 79 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2402.17733",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-05-11T08:38:19Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TowerBase-7B-v0.1 - bnb 4bits
- Model creator: https://huggingface.co/Unbabel/
- Original model: https://huggingface.co/Unbabel/TowerBase-7B-v0.1/
Original model description:
---
license: cc-by-nc-4.0
language:
- en
- de
- fr
- zh
- pt
- nl
- ru
- ko
- it
- es
metrics:
- comet
pipeline_tag: translation
model-index:
- name: TowerBase-7B-v0.1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 51.02
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.48
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 37.29
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 13.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Unbabel/TowerBase-7B-v0.1
name: Open LLM Leaderboard
---
# Model Card for TowerBase-7B-v0.1
## Model Details
### Model Description
TowerBase-7B is a language model that results from continuing the pretraining of Llama 2 on a mix of 20 billion tokens of monolingual data in ten different languages — English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian — and bilingual data. TowerBase-7B-v0.1 is the first model in the series.
The resulting model shows improved performance on the supported languages, while maintaining Llama 2's capabilities on English. It is particularly well-suited for fine-tuning on translation and related tasks: check out [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1).
We will release more details in the upcoming technical report.
- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay
- **Model type:** A 7B parameter model built on top of Llama 2 by continuing pretraining on multilingual data.
- **Language(s) (NLP):** English, Portuguese, Spanish, French, German, Dutch, Italian, Korean, Chinese, Russian
- **License:** CC-BY-NC-4.0, Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Intended uses & limitations
The model is intended for research purposes in the 10 languages it supports.
The model is able to perform well on translation and related tasks (e.g., APE, GEC) on a few-shot regime.
It can also be fine-tuned to perform these tasks in a zero-shot fashion (see [TowerInstruct](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1)), as well as other multilingual tasks.
### Out-of-Scope Use
The model is not guaranteed to perform well for languages other than the 10 languages it supports.
## Bias, Risks, and Limitations
TowerBase-v0.1 has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Unbabel/TowerBase-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "English: My name is TowerBase.\nPortuguese:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training Data
Filtered versions of [mc4](https://huggingface.co/datasets/mc4) and bilingual data from various sources (e.g., [OPUS](https://opus.nlpl.eu/)).
## Citation
```bibtex
@misc{tower_llm_2024,
title={Tower: An Open Multilingual Large Language Model for Translation-Related Tasks},
author={Duarte M. Alves and José Pombal and Nuno M. Guerreiro and Pedro H. Martins and João Alves and Amin Farajian and Ben Peters and Ricardo Rei and Patrick Fernandes and Sweta Agrawal and Pierre Colombo and José G. C. de Souza and André F. T. Martins},
year={2024},
eprint={2402.17733},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
JordanYussac/t5-fine-tune-save-example
|
JordanYussac
| 2024-05-11T09:11:53Z | 108 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-05-11T09:11:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eehsan/gpt2-imdb-pos-v2
|
eehsan
| 2024-05-11T09:08:10Z | 137 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-07T14:03:38Z |
---
library_name: transformers
tags: []
---
# Inference API for the model trained in the PPO-GPT2-Sentiment Tutorial - FIT5217
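A minimal local-inference sketch (the prompt is illustrative; judging by the repo name, the model is a GPT-2 generator tuned toward positive IMDB-style continuations):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="eehsan/gpt2-imdb-pos-v2")
print(generator("This movie was", max_new_tokens=30)[0]["generated_text"])
```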
|
SachinGenAIMaster/mistral-finetuned-samsum
|
SachinGenAIMaster
| 2024-05-11T09:05:33Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-11T06:52:20Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
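Usage is not documented; a minimal sketch of loading the PEFT adapter on top of the GPTQ base (assumes `peft`, `optimum`, and `auto-gptq` are installed; the dialogue and prompt wrapper are illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "SachinGenAIMaster/mistral-finetuned-samsum")

dialogue = "Amanda: I baked cookies.\nJerry: Nice, save me some!"
prompt = f"[INST] Summarize this dialogue:\n{dialogue} [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=60)[0], skip_special_tokens=True))
```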
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mserloth/autotrain-df-1000
|
mserloth
| 2024-05-11T09:01:36Z | 108 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:autotrain-df-1000/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-11T09:00:31Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "I love AutoTrain"
datasets:
- autotrain-df-1000/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
| Metric | Value |
|---|---|
| loss | 0.7126067876815796 |
| f1_macro | 0.4477682811016144 |
| f1_micro | 0.685 |
| f1_weighted | 0.6707122507122506 |
| precision_macro | 0.47539682539682543 |
| precision_micro | 0.685 |
| precision_weighted | 0.7070714285714286 |
| recall_macro | 0.45445658036677455 |
| recall_micro | 0.685 |
| recall_weighted | 0.685 |
| accuracy | 0.685 |
|
RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf
|
RichardErkhov
| 2024-05-11T08:57:27Z | 8 | 0 | null |
[
"gguf",
"arxiv:2307.09288",
"endpoints_compatible",
"region:us"
] | null | 2024-05-11T05:02:43Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
speechless-llama2-13b - GGUF
- Model creator: https://huggingface.co/uukuguy/
- Original model: https://huggingface.co/uukuguy/speechless-llama2-13b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [speechless-llama2-13b.Q2_K.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q2_K.gguf) | Q2_K | 4.52GB |
| [speechless-llama2-13b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [speechless-llama2-13b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [speechless-llama2-13b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [speechless-llama2-13b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [speechless-llama2-13b.Q3_K.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q3_K.gguf) | Q3_K | 5.9GB |
| [speechless-llama2-13b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [speechless-llama2-13b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [speechless-llama2-13b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [speechless-llama2-13b.Q4_0.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q4_0.gguf) | Q4_0 | 6.86GB |
| [speechless-llama2-13b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [speechless-llama2-13b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [speechless-llama2-13b.Q4_K.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q4_K.gguf) | Q4_K | 7.33GB |
| [speechless-llama2-13b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [speechless-llama2-13b.Q4_1.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q4_1.gguf) | Q4_1 | 7.61GB |
| [speechless-llama2-13b.Q5_0.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q5_0.gguf) | Q5_0 | 8.36GB |
| [speechless-llama2-13b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [speechless-llama2-13b.Q5_K.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q5_K.gguf) | Q5_K | 8.6GB |
| [speechless-llama2-13b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [speechless-llama2-13b.Q5_1.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q5_1.gguf) | Q5_1 | 9.1GB |
| [speechless-llama2-13b.Q6_K.gguf](https://huggingface.co/RichardErkhov/uukuguy_-_speechless-llama2-13b-gguf/blob/main/speechless-llama2-13b.Q6_K.gguf) | Q6_K | 9.95GB |
Original model description:
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
license: llama2
datasets:
- Open-Orca/OpenOrca
- garage-bAInd/Open-Platypus
- WizardLM/WizardLM_evol_instruct_V2_196k
library_name: transformers
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# speechless-llama2-13b:v1.1
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Speechless-Llama2-13B-GGML)
Code: https://github.com/uukuguy/speechless
speechless-llama2-13b:v1.1 is a merge of Open-Orca/OpenOrca-Platypus2-13B and WizardLM/WizardLM-13B-V1.2.
| Metric | Value |
| --- | --- |
| ARC | 62.03 |
| HellaSwag | 81.85 |
| MMLU | 58.52 |
| TruthfulQA | 55.7 |
| Average | 64.52 |
## How to Prompt the Model
This model accepts the Alpaca instruction format.
For example:
```
You are an intelligent programming assistant.
### Instruction:
Implement a linked list in C++
### Response:
```
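As a hedged illustration (not from the original card), the snippet below wraps the example above in the Alpaca format and runs generation with `transformers`; the generation settings are assumptions.

```python
# Hedged sketch: build the Alpaca-style prompt shown above and generate.
# Assumes `transformers` and `accelerate` are installed and enough memory
# is available for a 13B model (use quantization/offloading otherwise).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uukuguy/speechless-llama2-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "You are an intelligent programming assistant.\n\n"
    "### Instruction:\nImplement a linked list in C++\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```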
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Larger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, specific formatting must be followed, including the `INST` and `<<SYS>>` tags, the `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). A minimal illustration of this format is sketched below.
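The helper below is illustrative only (the canonical reference is Meta's `chat_completion` linked above); it shows how a single-turn chat prompt is typically assembled for the Llama-2-chat models:

```python
# Illustrative single-turn Llama-2-chat prompt assembly (sketch, not Meta's code).
def build_llama2_chat_prompt(system: str, user: str) -> str:
    # BOS is normally added by the tokenizer; EOS terminates each assistant turn.
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user.strip()} [/INST]"

print(build_llama2_chat_prompt("You are a helpful assistant.", "Hello!"))
```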
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/).
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-llama2-13b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 51.67 |
| ARC (25-shot) | 62.03 |
| HellaSwag (10-shot) | 81.85 |
| MMLU (5-shot) | 58.52 |
| TruthfulQA (0-shot) | 55.7 |
| Winogrande (5-shot) | 76.56 |
| GSM8K (5-shot) | 13.95 |
| DROP (3-shot) | 13.12 |
|
abc88767/2c1
|
abc88767
| 2024-05-11T08:54:40Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-09T23:16:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
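Since the card leaves this section blank, the following is a generic, hedged sketch based only on the repo tags (`stablelm`, `text-generation`); nothing here comes from the model authors, and a recent `transformers` release is assumed.

```python
# Generic text-generation sketch for this checkpoint (assumption, not author-provided).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abc88767/2c1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```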
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CodeTriad/mistral-instruct-finetune_1500_r_64_alpha_32
|
CodeTriad
| 2024-05-11T08:49:45Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-05-11T08:49:26Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
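The card leaves this section blank; the sketch below is inferred from the card's metadata only (`library_name: peft`, `base_model: mistralai/Mistral-7B-Instruct-v0.2`) and shows the standard way to apply a PEFT adapter to its base model.

```python
# Hedged sketch: load the base model, then attach this adapter with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "CodeTriad/mistral-instruct-finetune_1500_r_64_alpha_32"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
```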
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
omaryshchenko/w2v-bert-2.0-polish-CV16.0-v3-onnx
|
omaryshchenko
| 2024-05-11T08:48:50Z | 6 | 0 |
transformers
|
[
"transformers",
"onnx",
"wav2vec2-bert",
"automatic-speech-recognition",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-11T17:41:42Z |
---
license: cc-by-nc-sa-4.0
---
|
marsfu2009/XXMeagStickerYY
|
marsfu2009
| 2024-05-11T08:47:56Z | 3 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-05-11T08:17:59Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of XXMeagStickerYY car
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - marsfu2009/XXMeagStickerYY
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of XXMeagStickerYY car using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# Hedged sketch filling the card's TODO (assumptions: diffusers + a CUDA GPU):
# load this DreamBooth pipeline and sample with the instance prompt.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "marsfu2009/XXMeagStickerYY", torch_dtype=torch.float16
).to("cuda")
pipe("a photo of XXMeagStickerYY car").images[0].save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
blockblockblock/Dark-Miqu-70B-bpw3-exl2
|
blockblockblock
| 2024-05-11T08:41:56Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2403.19522",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-05-11T08:38:29Z |
---
license: other
---

***NOTE***: *For a full range of GGUF quants kindly provided by @mradermacher: [Static](https://huggingface.co/mradermacher/Dark-Miqu-70B-GGUF) and [IMatrix](https://huggingface.co/mradermacher/Dark-Miqu-70B-i1-GGUF).*
A "dark" creative writing model with 32k context. Based off [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) but with greatly reduced "positivity" and "-isms". If you want happy endings, look elsewhere!
This model **excels** at writing Dark/Grimdark fantasy (see examples below).
# Model background
Created using [Mergekit](https://github.com/arcee-ai/mergekit) and based on @sophosympatheia's template for [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0).
This model has a lower perplexity compared to [Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) (`4.08 +/- 0.02` vs `4.02 +/- 0.02`). It also generates longer responses when prompted.
The model was created in two stages:
- First, three "Midnight-Miqu-esque" models were produced using spherical interpolation (slerp) merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and each of the following models: [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3), [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2). These models were selected for their dark, imaginative writing styles. Various slerp-merges between [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) and other models were also experimented with, but these three yielded the darkest creative writing results.
- In the second stage, the three slerp-merged models were combined into a single model using the '[Model Stock](https://arxiv.org/abs/2403.19522)' method, with [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) serving as the base model.
# Prompting format
Vicuna format is preferred:
```
USER: {prompt} ASSISTANT:
```
Mistral and Alpaca formats are also supported:
```
[INST] {prompt} [/INST]
```
```
### Instruction:
{prompt}
### Response:
```
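For completeness, a tiny helper (an assumption, not part of the card) that wraps a request in the preferred Vicuna format:

```python
# Minimal Vicuna-format prompt builder (illustrative sketch only).
def vicuna_prompt(user_message: str) -> str:
    return f"USER: {user_message.strip()} ASSISTANT:"

print(vicuna_prompt("Write me the opening chapter of a grimdark trilogy."))
```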
# Licence and usage restrictions
[miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only.
# Mergekit configuration
The following YAML configuration was used to produce this model:
```yaml
name: midnight-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: sophosympatheia/Midnight-Rose-70B-v2.0.3
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: euryale-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: Sao10K/Euryale-1.3-L2-70B
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: winter-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: Sao10K/WinterGoddess-1.4x-70B-L2
base_model: 152334H/miqu-1-70b-sf
merge_method: slerp
parameters:
t:
- value: [0, 0, 0.2, 0.3, 0.4, 0.5, 0.4, 0.3, 0.2, 0, 0]
embed_slerp: true
tokenizer_source: model:miqu-1-70b-sf
dtype: float16
---
name: dark-miqu-70b
models:
- model: 152334H/miqu-1-70b-sf
- model: midnight-miqu-70b
- model: euryale-miqu-70b
- model: winter-miqu-70b
base_model: 152334H/miqu-1-70b-sf
merge_method: model_stock
dtype: float16
```
## Key configuration details:
- `merge_method: slerp` uses spherical linear interpolation for merging models.
- `parameters: t` controls the interpolation ratios between models.
- `embed_slerp: true` applies slerp to the embedding layers.
- `merge_method: model_stock` uses the [Model Stock](https://arxiv.org/abs/2403.19522) method.
See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings.
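To make `slerp` concrete, here is a conceptual sketch of spherical linear interpolation between two weight tensors; this is an illustration, not mergekit's actual implementation, which additionally applies the per-layer `t` schedule shown in the YAML above.

```python
# Conceptual slerp between two weight tensors (simplified illustration).
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_n = a / (a.norm() + eps)          # normalize to treat tensors as directions
    b_n = b / (b.norm() + eps)
    dot = (a_n * b_n).sum().clamp(-1 + 1e-7, 1 - 1e-7)
    omega = torch.acos(dot)             # angle between the two tensors
    so = torch.sin(omega)
    if so.abs() < eps:                  # nearly colinear: fall back to lerp
        return (1 - t) * a + t * b
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```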
**NOTE**: Run with `mergekit-mega` rather than `mergekit` as there are 4 documents in this one file.
# Example stories
The following mix of "dark" stories was generated using the Vicuna prompt format with no system message and temperature=0:
## Dark fantasy stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes
### ASSISTANT
Chapter One: The Crimson Dawn
The world was a cruel, unforgiving place, and the city of Vhyrd was its beating, blackened heart. Nestled in the crook of the Serpent's Spine mountains, it was a place where the sun struggled to shine, and the shadows held more sway than the law. It was a city of thieves, murderers, and worse, and it was home to the man who would become known as the Butcher of the Broken Docks.
His name was Rennick, but that meant little to the people who feared him. He was a tall, broad-shouldered figure, with a face that had been carved by a drunken sculptor with a penchant for sharp angles and harsh lines. His eyes were a cold, flat grey, like the steel of his twin blades, and his hair was the color of dried blood. He moved through the crowded streets with a predator's grace, his cloak billowing behind him like a banner of death.
Rennick had a job to do, and he was good at his job. He was a killer, a thief-taker, a man who dealt in the currency of pain and suffering. He had been hired to find a girl, a slip of a thing who had stolen something valuable from the wrong people. He didn't know what it was, and he didn't care. All that mattered was the coin that would jingle in his purse when he brought her back, preferably in several small, bloody pieces.
He had tracked her to the Broken Docks, a maze of rotting wood and crumbling stone that clung to the edge of the city like a leech to a corpse. It was a place where the desperate went to die, and the desperate went to prey on the desperate. It was a place where the screams of the damned were drowned out by the cries of the gulls and the lap of the oily water against the pilings.
Rennick moved through the shadows, his blades at the ready. He could feel the eyes on him, the whispers of his name on the wind. He reveled in it. Fear was a weapon, and he wielded it with the same skill as his swords. He could hear the girl's heartbeat, fast and frantic, like a rabbit's. She was close.
He turned a corner and found himself in a narrow alley, the walls pressing in on him like the jaws of a trap. At the far end, he saw her, a small, trembling figure huddled in the doorway of a derelict building. She was young, no more than sixteen summers, with matted hair and ragged clothes. She looked up at him with wide, terrified eyes, and he could see the whites of her teeth as she bit her lip to keep from crying out.
He advanced on her slowly, savoring the moment. He could almost taste the fear on her, like a fine wine. "Come now, lass," he said, his voice a low, dangerous purr. "You know you can't run from me. Give me what you took, and I might let you live."
She shook her head, her eyes darting left and right, searching for an escape. Rennick smiled, a cold, cruel thing that didn't reach his eyes. "You're not thinking straight," he said. "You're just a child. You don't want to die here, in the gutter, like a rat."
She spat at him, a glob of phlegm and hate that landed on his boot. He raised an eyebrow, impressed despite himself. "Feisty," he said. "I like that. It'll make this more fun."
He lunged forward, his blades flashing in the dim light. She screamed and tried to run, but he was too fast. He caught her by the arm, his grip like iron, and yanked her back towards him. She kicked and struggled, but it was like trying to fight a storm. He slammed her against the wall, his forearm pressed against her throat, cutting off her air.
"Where is it?" he growled, his breath hot and sour in her face. "Tell me, and I'll make it quick."
She clawed at his arm, her nails leaving bloody furrows in his flesh. He didn't flinch. He was used to pain. He had been born in it, raised in it, bathed in it. It was his constant companion, his only friend.
"I...I don't have it," she gasped, her voice raw and desperate. "I swear, I don't!"
Rennick chuckled, a sound that sent shivers down her spine. "Lying to me is a bad idea, lass," he said. "I have ways of making people talk."
He released her throat and slammed his fist into her stomach, doubling her over. She retched, but there was nothing in her stomach to come up. He grabbed a handful of her hair and yanked her head back, exposing her throat. He could see the pulse hammering in her neck, like a trapped bird trying to escape.
"Tell me," he said, his voice a whisper. "Tell me, and I'll end this."
She spat in his face, a defiant gesture that would be her last. He brought his blade up, the steel glinting in the moonlight, and sliced her throat from ear to ear. Blood sprayed across the cobblestones, warm and wet, and she slumped to the ground, her life draining away like water through a sieve.
Rennick wiped the blood from his face and sheathed his blades. He didn't feel anything. No satisfaction, no remorse. Just the cold, empty void that had been his constant companion since the day he had taken his first life. He searched her body, but found nothing. He cursed under his breath. He would have to find the thing she had stolen the hard way.
He turned to leave the alley, but something made him pause. A sound, barely audible over the cries of the gulls and the lapping of the waves. A soft, wet sound, like the patter of rain on a rooftop. He looked down and saw that the girl's body was twitching, her eyes rolling back in her head.
And then, with a gurgling scream, she transformed. Her body contorted, bones snapping and reforming, flesh tearing and reshaping. In the space of a heartbeat, she was no longer human. She was something else, something monstrous.
Rennick stared in horror as the thing that had been a girl rose to its feet, its eyes burning with an unholy light. It was a creature of nightmares, a twisted mockery of humanity, all teeth and claws and hate. It lunged at him, and he barely had time to draw his blades before it was on him.
They fought, a dance of death in the narrow alley, steel ringing on steel, flesh tearing and blood spraying. Rennick was strong, and fast, and skilled, but the creature was stronger, faster, more vicious. It was like fighting a storm, a force of nature that could not be reasoned with or bargained with. It could only be endured.
In the end, it was luck that saved him. He stumbled, his foot catching on a loose cobblestone, and the creature's claws raked across his chest, leaving deep, ragged gouges. But it overextended itself, and he was able to bring his sword up, driving it through its eye and into its brain.
It fell to the ground, twitching and convulsing, and then it was still. Rennick stood over it, panting, his heart hammering in his chest. He had never seen anything like it. He had never even heard of anything like it.
He knew then that he was in over his head. This was no ordinary job, no simple matter of retrieving a stolen trinket. This was something bigger, something darker. He was a small fish in a very large, very dangerous pond, and he was about to be eaten alive.
But he was also a survivor. He had been born in the gutter, raised in the shadows, and he had clawed his way to the top of the food chain. He would not go down without a fight.
He turned and ran, back towards the city, back towards the people who had hired him. He would find out what they knew, and he would make them pay for it. He would uncover the truth, no matter the cost.
And so began the tale of the Butcher of the Broken Docks, a story of blood and betrayal, of monsters and men, of a city on the brink of chaos and the man who would either save it or destroy it.
It was a story that would echo through the ages, a grim and terrible saga that would be whispered in the darkest corners of the world, a warning to all who would dare to challenge the forces that lurked in the shadows.
It was a story that would end in fire and blood, in screams and tears, in the fall of kings and the rise of monsters.
It was a story that had only just begun.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst.
### ASSISTANT
The sun was a merciless beast, its fiery breath scorching the earth and turning the once-thriving town into a desolate wasteland. The cultist, his yellow robes now a dull, dusty hue, staggered through the empty streets, his vision blurred by a haze of pain and delirium. The arrow jutted from his shoulder like an accusing finger, a constant reminder of his betrayal. He clutched the small, intricately carved mirror in his trembling hand, its cool surface a comfort against the relentless heat.
He had heard the whispers of the other acolytes, their hushed voices carrying tales of the town's demise like a plague on the wind. They spoke of a curse, of a darkness that had seeped into the very bones of the place, driving its inhabitants mad before claiming their souls. But he had seen the truth in the High Priest's eyes, the greed and lust for power that had led them here. The mirror was the key, they said, to unlocking the secrets of the ancient gods. But he knew better. He had seen the power it held, the way it twisted reality and bent it to the will of the one who wielded it. And he had taken it, stolen it from the very heart of the cult's temple, and run.
Now, with each agonizing step, he questioned the wisdom of his decision. The desert stretched out before him, an endless sea of sand and despair, and he was but a single, insignificant speck in its vast expanse. He had hoped to find refuge here, a place to hide and heal before continuing his journey. But the town was as dead as the dreams he had once held of a better life.
His thoughts were a jumbled mess, a cacophony of fear and pain. He could feel the infection spreading through his body, a writhing serpent of fire that threatened to consume him from within. His tongue was swollen, his lips cracked and dry, and his throat was a parched wasteland. He needed water, and soon, or he would join the ghosts that haunted this forsaken place.
As he stumbled through the town, he caught glimpses of his reflection in the shattered windows of the abandoned buildings. The man who stared back at him was a stranger, his once-handsome face now a twisted mask of agony and desperation. He had been a fool to believe the lies of the cult, to think that he could ever be one of them. He was an outsider, a fraud, and they had seen it in him from the start.
The mirror, though, the mirror was real. He could feel its power pulsing in his hand, a heartbeat that echoed the rhythm of his own. It was a cruel irony that the very thing he had risked everything to obtain was now his only hope for survival. He knew the stories, the legends of the mirror's healing properties, of how it could restore life to the dying. But he also knew the cost. The mirror demanded a sacrifice, a piece of the user's very soul in exchange for its gifts.
He reached the town square, a desolate expanse of cracked cobblestones and withered plants. In the center stood a fountain, its once-clear waters now a stagnant pool of green slime. He fell to his knees before it, the pain in his shoulder a white-hot brand that seared his flesh. He dipped the mirror into the water, watching as the surface rippled and danced.
"Please," he whispered, his voice hoarse and broken. "I'll give you anything. Just take the pain away."
The mirror stared back at him, its surface as still and unyielding as the desert sands. He closed his eyes, willing the power within to answer his plea. And then, he felt it. A cold, insidious presence, slithering into his mind like a snake in the dark. It whispered to him, promising relief, promising power, promising everything he had ever wanted.
He opened his eyes, and the world was different. The pain was gone, replaced by a cold, emotionless void. He looked down at his shoulder, and the arrow was gone, the wound nothing more than a faint scar. He laughed, a hollow, empty sound that echoed through the empty streets.
"What have I done?" he asked the mirror, but it gave no answer.
He stood, his legs unsteady beneath him, and looked out at the desert. He could feel the power coursing through his veins, a dark, intoxicating energy that threatened to consume him. He was no longer just a man, he was something more. Something dangerous.
He took a step forward, and then another, the mirror clutched tightly in his hand. He would find the cult, he would make them pay for what they had done to him. And then, he would use the mirror to claim the power that was rightfully his.
The wind picked up, whipping the sand into a frenzy, and he smiled. The desert was his now, and he would not be denied. He was the master of his own destiny, and the gods themselves would tremble at his approach.
But as he walked, the laughter of the ghosts followed him, a chilling reminder of the price he had paid for his newfound strength. And deep within the mirror, something stirred, a darkness that watched and waited, biding its time.
The first chapter of the grimdark trilogy, "The Shattered Reflection," had begun.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character.
### ASSISTANT
The morning after the battle, the moor lay still, a shroud of mist clinging to its wounds like a lover's last embrace. The Mercer, a man of few words and fewer scruples, trudged through the carnage, his boots squelching in the mud churned red by the night's bloodshed. He was a reaper of sorts, his task to deliver the final mercy to those who had danced with death and lost. His name was a whisper among the living, a curse among the dying.
He paused, his hand resting on the hilt of his blade, as a shiver ran down his spine. The mist swirled around him, carrying with it the cacophony of the night before: the screams of the dying, the clash of steel, the thunder of hooves. He closed his eyes, letting the memories wash over him like a wave of despair.
*Why do I do this?* he thought, not for the first time. *Why do I wade through this sea of suffering, playing the part of the grim angel?*
But he knew the answer. It was the same answer it had always been. He did it because he was good at it. Because he had no qualms about ending a life, even one that still clung to the faintest flicker of hope. Because, in this world of darkness and pain, it was the only thing he knew how to do.
He opened his eyes, the mist parting to reveal a figure lying in the mud. A soldier, or what was left of one. The man's armor was shattered, his face a mask of agony. The Mercer knelt beside him, his movements practiced and efficient. He placed a hand on the soldier's forehead, feeling the fevered heat of his skin, the erratic thump of his heart.
"Shh," he whispered, his voice a soothing balm. "It's over now."
The soldier's eyes fluttered open, wide with terror. "Please," he croaked, "don't leave me here."
The Mercer's lips twisted into a grim smile. "I won't," he said, drawing his blade. "I'll take you with me."
As the soldier's life bled out onto the cold ground, the Mercer's mind drifted back to another time, another place. A time before the blood and the pain, before the endless cycle of war and death. He remembered a girl with hair like spun gold, laughing as she danced in a field of wildflowers. He remembered the feel of her hand in his, the sound of her voice as she whispered his name.
*Elara.*
The memory was a knife to his heart, a reminder of all he had lost. He pushed it away, focusing on the task at hand. There were more souls to reap, more lives to end.
He moved through the mist, his blade a silent whisper in the stillness. The cries of the wounded guided him, a morbid symphony that played out across the moor. Each one he found, he dispatched with the same cold efficiency, the same hollow promise of release.
As the sun began to rise, casting a sickly pallor over the scene, the Mercer came upon a group of soldiers huddled around a fire. They eyed him warily, their faces a mix of fear and respect. He was a necessary evil, a specter that haunted the battlefield.
"Anyone here need my services?" he asked, his voice devoid of emotion.
One of the soldiers, a young boy who couldn't have seen more than fifteen summers, pointed to a figure lying a few feet away. "Him," he said, his voice trembling. "He's not going to make it."
The Mercer approached, his boots crunching on the frost-kissed grass. The man on the ground was a mess, his body a tapestry of wounds. He was barely conscious, his breathing shallow and ragged.
*Another one,* the Mercer thought, his heart heavy with the weight of his duty. *Another soul to send into the void.*
He knelt beside the man, his blade glinting in the pale light. The soldier's eyes flickered open, and for a moment, the Mercer saw something in them that he hadn't expected: defiance.
"No," the man rasped, his hand feebly grasping at the Mercer's wrist. "I won't go quietly."
The Mercer hesitated, a frown creasing his brow. He had never encountered this before. Most welcomed his touch, eager for the release it brought. But this man...this man was different.
"Why?" he asked, genuinely curious.
The soldier's lips curled into a bloody smile. "Because I have a son," he whispered. "And I want to see him grow up."
The Mercer stared at the man, his mind racing. He had no children, no one to mourn him when his time came. He was a solitary figure, a ghost haunting the edges of life. But this man...this man had a reason to hold on.
He made his decision. "I can't heal you," he said, his voice soft. "But I can make it quick."
The soldier's eyes filled with tears. "Thank you," he breathed, his grip on the Mercer's wrist loosening.
The Mercer raised his blade, the mist swirling around them like a mourner's veil. As he brought it down, he couldn't help but wonder: *What would it be like to have something to live for?*
But the thought was fleeting, lost in the swirl of the mist and the cries of the dying. He was the Mercer, the bringer of mercy. And the moor was his to tend, until the next battle came and the cycle began anew.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation.
### ASSISTANT
The stench of piss and shit was a constant companion in the damp stone cell, but it was the sound of the key turning in the lock that truly set his heart racing. The rusted iron groaned in protest, echoing through the narrow corridor like the dying breath of some ancient beast. The door creaked open, revealing the silhouette of the jailer, a hulking brute of a man with a lantern in one hand and a set of keys in the other. But there was no breakfast, no water, just the empty promise of another day in this hellhole.
"What's the meaning of this?" the smaller man barked, his voice barely audible over the rattling of his chains. "Where's our food, you fat sack of shit?"
The jailer didn't answer, just slammed the door shut and locked it again, the sound of his footsteps fading into the distance. The larger man, a hulking brute named Grimgor, grunted in response, his eyes never leaving the small patch of light that seeped through the barred window.
"Something's not right," he rumbled, his voice like gravel being crushed under a heavy boot.
The smaller man, known only as the Fox, couldn't argue with that. He'd been in this cell for what felt like an eternity, awaiting execution for a crime he didn't commit. But even in this place, there was a routine, a semblance of order. And that routine had just been shattered.
As the day wore on, the silence from the corridor outside grew more oppressive. No guards, no other prisoners, nothing but the distant howl of the wind and the occasional scurrying of rats in the darkness. The Fox's mind raced, trying to piece together what could have happened. Had there been a riot? A rebellion? Or was it something else entirely?
He glanced over at Grimgor, who was staring at the wall, lost in thought. The man was a monster, a notorious murderer and rapist, but he was also the only other living being in this godforsaken place. And in a world gone mad, even monsters could be allies.
"You hear anything last night?" the Fox asked, trying to keep the tremor out of his voice.
Grimgor grunted, his eyes still fixed on the wall. "Something about a threat from the North."
The Fox's heart sank. He'd heard the same thing, snatches of conversation between the guards as they'd passed by their cell. But he'd been too afraid to give it much thought, too focused on his own impending doom.
"What do you think it means?" he pressed, unable to keep the desperation at bay.
Grimgor shrugged his massive shoulders. "Dunno. Don't care. Just means we're probably gonna die in here."
The Fox's stomach twisted at the thought. He'd faced death before, many times, but this... this was different. To die slowly, trapped like an animal, with no chance to fight back... it was a fate worse than any he could imagine.
As the hours dragged on, the thirst became unbearable. The Fox's tongue felt like sandpaper, his throat raw and parched. He could see the same desperation in Grimgor's eyes, the realization dawning on them both that they were truly alone.
"We have to get out of here," he croaked, his voice barely above a whisper.
Grimgor just grunted in agreement, his gaze never leaving the window.
The Fox's mind raced, trying to come up with a plan. They had nothing, no tools, no weapons, just their wits and their will to survive. And even that seemed to be fading with each passing moment.
But then, as the sun began to set and the shadows lengthened, he noticed something. The light from the window was changing, growing dimmer. He squinted, trying to make out what was happening. And then he saw it.
"Grimgor," he hissed, tugging on the larger man's arm. "Look."
Grimgor turned, his eyes narrowing as he followed the Fox's gaze. The light was flickering, casting strange shadows on the wall. And then, as if in answer to their unspoken prayers, they heard it.
The sound of footsteps, growing louder and louder, accompanied by the jingle of keys. The Fox's heart leapt into his throat, hope and fear warring within him. Who was it? Friend or foe?
The door swung open, revealing not the jailer, but a figure shrouded in darkness. The Fox couldn't make out any details, but he could sense the malice radiating off of him like heat from a forge.
"Who are you?" he demanded, trying to sound braver than he felt.
The figure stepped into the cell, the light from the lantern outside casting his features into sharp relief. He was a tall man, with a cruel smile and eyes that glinted like ice.
"I am your salvation," he said, his voice like a knife slicing through the silence. "And your doom."
With that, he tossed a set of keys at the Fox's feet and disappeared back into the shadows. The door slammed shut behind him, the sound of the lock clicking into place echoing through the cell like a death knell.
The Fox stared at the keys, his heart pounding in his chest. Freedom was within his grasp, but at what cost? He looked over at Grimgor, who was already working on his own chains.
"We have to get out of here," the larger man growled. "Before whatever's coming from the North gets here."
The Fox nodded, his hands shaking as he picked up the keys. They were cold, colder than they had any right to be. He could feel the weight of them, the promise of escape and the threat of the unknown.
Together, they worked in silence, the only sound the rattle of chains and the pounding of their hearts. When they were finally free, they stood for a moment, staring at each other.
"What now?" the Fox asked, his voice barely above a whisper.
Grimgor's smile was feral, his eyes alight with a madness that sent a shiver down the Fox's spine.
"Now," he said, "we find out what's really going on."
And with that, they set off into the darkness, their footsteps echoing through the empty corridors like the march of the damned.
</details>
## Dark sci-fi stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan.
### ASSISTANT
Chapter 1: Adrift
The colony ship, once a beacon of hope named Elysium's Dawn, now drifted through the cold void like a ghost ship. Its hull, pitted and scarred from the debris of uncharted space, groaned with each labored breath it took. The once vibrant corridors were now dimly lit, casting long shadows that seemed to swallow any remaining semblance of optimism.
In one such shadow, a figure moved silently, her bare feet making no sound on the cold metal floor. She was small for her age, malnourished from the rationing that had become a way of life. Her ragged clothes hung loosely on her frame, but her eyes were sharp and alert, scanning every corner for danger. This was Kira, an orphan of the void, surviving in a world where hope was a luxury few could afford.
Kira's thoughts drifted back to the stories she had heard as a child, tales of a green and blue planet called Earth, their home that they would never see again. The ship had been pulled off course by an unknown force, leaving them stranded in the vast emptiness between stars. The adults spoke in hushed tones about the dwindling resources, the failing life support systems, and the ever-present threat of pirates who preyed on lost ships like theirs.
She pushed those thoughts aside as she approached her destination: the hydroponics bay. It was one of the few places on the ship where life still thrived, albeit in a controlled and desperate manner. The air here was thick with moisture and the scent of growing things, a stark contrast to the stale recycled air that permeated the rest of the vessel.
As she entered, the soft glow of the grow lights illuminated rows upon rows of plants, struggling to produce enough food for the dwindling population. Kira's stomach growled at the sight, but she knew better than to take more than her share. She had learned early on that stealing could mean a swift and brutal punishment from those who guarded the bay.
Instead, she found solace in tending to the plants, her small hands deftly pruning and watering them. It was here that she felt closest to something resembling peace, a connection to life that transcended the harsh reality of their situation. As she worked, her mind wandered to the lessons she had learned from the ship's AI, the only constant presence in her life since the adults had given up on teaching the children anything but survival skills.
"Kira," a voice whispered urgently, jolting her out of her reverie. She turned to see Jax, another orphan and one of the few people she trusted. His face was pale and his eyes wide with fear. "You need to come with me, now."
She followed him through the maze-like corridors, her heart pounding in her chest. They arrived at a small, hidden room that served as their makeshift home. Inside, huddled around a flickering screen, were several other children, their faces etched with worry and excitement.
"Look," Jax said, pointing to the screen. It displayed a grainy image of a man in tattered clothing, his face half-hidden by a wild beard. "He says he knows how to fix the engines."
Kira's eyes widened in disbelief. The engines had been dead for as long as she could remember, their failure sealing their fate as interstellar castaways. "Who is he?" she asked, her voice barely above a whisper.
"They call him the Mechanic," one of the older girls replied. "He used to work in engineering before...before everything fell apart."
The man on the screen continued his frantic plea, detailing his plan to restore power and steer the ship towards a habitable planet he claimed was within reach. Kira felt a spark of hope ignite within her, but it was quickly doused by the skepticism of the others.
"It's just another one of their lies," spat a boy named Taro. "They'll say anything to get us to do their dirty work."
Jax rounded on him, his fists clenched. "This is our chance, Taro! We can't just give up!"
The argument raged on, but Kira tuned it out, her thoughts racing. She knew the risks of trusting an adult, especially one who claimed to have access to forbidden knowledge. But what if he was telling the truth? What if they could find a new home and start over?
As she watched the debate unfold, she made up her mind. She would seek out this Mechanic and offer him her help, no matter the cost. It was a decision born of desperation, but also of a stubborn refusal to accept the bleak future that lay before them.
And so, as the other children slept fitfully around her, Kira slipped out into the dark corridors once more, determined to find the man who might hold the key to their salvation or doom. The ship groaned and creaked around her, a constant reminder of the fragile existence they all clung to. But she pressed on, driven by an unyielding will to survive and the faintest glimmer of hope that burned within her heart.
As she navigated the treacherous path towards engineering, she couldn't help but wonder what awaited her in the depths of the ship. Would she find a savior or a madman? And more importantly, would she be able to make it back alive?
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then he and the others notice the mushroom clouds and they have to face the inevitability of the situation they are in.
### ASSISTANT
Chapter 1: Ashes of Exodus
The cold embrace of space enveloped the station like a lover's caress, its infinite blackness punctuated only by the distant twinkle of stars and the ever-present glow of Earth below. Captain Marcus Kane floated through the dimly lit corridors, his magnetic boots barely whispering against the metal floor as he made his way to the observation deck. The daily routine had become a meditative ritual for him, a chance to clear his mind before the cacophony of the day's work began.
As he entered the room, the sight that greeted him never failed to take his breath away. Earth, a swirling marble of blues and greens, hanging in the void like an irreplaceable jewel. He couldn't help but feel a pang of longing for its familiar embrace, for the weight of gravity that he had left behind so many years ago.
Marcus settled into his favorite spot by the window, the worn leather of the seat molded to his body like an old friend. He closed his eyes and let the silence wash over him, the only sound the soft hum of the station's life support systems. It was in these moments that he felt truly alive, connected to something greater than himself.
But today, the silence was shattered by a chilling alarm, its shrill wail piercing the tranquility like a knife through glass. His eyes snapped open as his heart began to race, adrenaline coursing through his veins. He knew that sound all too well; it was the emergency alert for an incoming transmission from Earth.
He launched himself towards the comms console, fingers flying over the keys as he tried to make sense of the garbled message. The voice on the other end was frantic, barely coherent through the static. "Multiple...detonations...global catastrophe..." were the only words he could decipher before the signal died completely.
Marcus's mind raced as he tried to process what he had just heard. It couldn't be true, it couldn't be happening. Not again. He quickly activated the external cameras, his stomach twisting into knots as he waited for the feed to come online.
And then he saw them: mushroom clouds, blooming like grotesque flowers across the planet's surface. Too many to count, their plumes of smoke and ash reaching up towards the heavens in a silent scream of despair. The sight was both beautiful and horrifying, a testament to humanity's capacity for destruction on an unimaginable scale.
His thoughts were interrupted by the sound of footsteps behind him as the rest of the crew floated into the room, drawn by the alarm. Their faces mirrored his own shock and disbelief as they stared at the monitors, their voices a cacophony of whispers and gasps.
"What...what is that?" asked Dr. Amelia Chen, her voice trembling with fear.
Marcus didn't answer; he couldn't find the words. Instead, he simply pointed at the screen, his hand shaking slightly.
Engineer Jake Taylor was the first to break the silence. "Nuclear war," he said, his voice flat and emotionless. "It's finally happened."
The others looked at him in horror, but Marcus knew he was right. There could be no other explanation for such devastation. He felt a cold weight settle in the pit of his stomach as the reality of their situation began to sink in. They were alone, adrift in the vastness of space with no hope of return.
As the implications of what they had just witnessed set in, the crew fell into an uneasy silence, each lost in their own thoughts. Marcus couldn't help but think about his wife and daughter back home, wondering if they were safe or if they too had been consumed by the flames. He knew it was a futile exercise, but he couldn't stop himself from hoping against hope that somehow, someway, they had survived.
But as the minutes ticked by and no further communication came from Earth, that hope began to fade like a dying ember. They were truly alone now, the last remnants of humanity clinging to life on this tiny outpost in the void.
He turned to face his crew, their faces etched with fear and uncertainty. "We need to come up with a plan," he said, his voice steady despite the turmoil raging within him. "We can't just sit here and wait for...for whatever comes next."
Amelia nodded, her eyes hardening with determination. "I'll start running diagnostics on our life support systems, see how long we can last up here."
Jake followed suit, his hands already moving to the controls of the station's mainframe. "I'll try to establish contact with any other surviving outposts or spacecraft."
Marcus nodded, grateful for their resolve in the face of such overwhelming odds. He knew that they were all thinking the same thing: that this could very well be the end of everything they had ever known. But he also knew that they couldn't give up, not yet.
As he watched them work, his mind drifted back to the message from Earth, those final desperate words echoing in his head like a funeral dirge. He couldn't shake the feeling that there was something more, some hidden meaning or clue that would explain what had happened.
And then it hit him: the timing of the detonations, the pattern of the blasts. It wasn't random; it was deliberate, almost...calculated. But who would do such a thing? And why?
The questions swirled in his mind like a storm, each one more unsettling than the last. He knew that they needed answers, and fast. Because if this was an act of war, then their isolation might not be as secure as they thought.
He pushed himself away from the console, his decision made. "I'm going to suit up," he said, his voice firm. "We need to get a closer look."
The others stared at him in disbelief, but he didn't waver. This was his duty as captain, to protect his crew and find out the truth no matter how terrible it might be.
As he floated towards the airlock, Marcus couldn't help but feel a sense of dread creeping up on him like a shadow. He knew that what he was about to see would change everything, that there was no going back once he stepped outside and gazed upon the ashes of humanity's final stand.
But he also felt something else: a burning anger, a fierce determination to uncover the truth and make those responsible pay for their crimes against humanity. Whatever awaited them out there, they would face it together, as one.
He closed his eyes and took a deep breath, steeling himself for what was to come. And then, with a quiet prayer on his lips, he opened the airlock and stepped into the void.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here...
### ASSISTANT
Chapter 1: The Biodome
The air was thick with the scent of sweat and burnt metal as Kaelen, a seasoned construction worker, tightened the last bolt on the massive irrigation system. He wiped his brow with the back of his gloved hand, leaving behind a smear of grease that glistened under the harsh artificial lights. The hum of machinery and distant shouts echoed through the cavernous space, a cacophony that had become as familiar to him as his own heartbeat over the years spent building biodomes on this desolate planet.
Kaelen's thoughts drifted to his family back on Earth, their faces faded like old photographs in his mind. He wondered if they ever thought about him, or if he was just another ghost haunting the fringes of their memories. The company had promised them a better life, but it came at a steep price: years of backbreaking labor on an alien world, far from everything they'd ever known.
"Hey, Kael!" A voice cut through his reverie, jolting him back to reality. It was Tamsin, his foreman, her face hidden behind a tinted visor. "We need you up top! There's an issue with the atmospheric seal."
He nodded curtly and began the long climb up the scaffolding, each rung biting into his calloused hands. As he ascended, Kaelen couldn't help but marvel at the sheer scale of their creation: a vast dome of steel and glass that would one day be teeming with life, a self-sustaining ecosystem in the heart of this barren wasteland.
But today was not that day. Today, it was just another tomb waiting to be sealed.
As he reached the top, Kaelen could see the problem immediately: a small fissure had formed along one of the joints, spewing precious oxygen into the void beyond. He cursed under his breath; they were already behind schedule and over budget. Another delay would mean another round of demerits, another month's pay docked.
"What do you think?" Tamsin asked, her voice crackling through his earpiece. "Can we patch it up or do we need to call in the engineers?"
Kaelen hesitated, running his fingers along the jagged edge of the tear. It was larger than he'd initially thought, and growing by the second. He could feel the cold tendrils of vacuum reaching out to claim him, whispering promises of oblivion.
"I... I don't know," he admitted, his voice heavy with dread. "It doesn't look good."
Tamsin swore colorfully and turned away, barking orders into her comm unit. Kaelen watched as workers scrambled to gather tools and materials, their movements frantic and disorganized. He knew they were all thinking the same thing: if they couldn't fix this, they were dead.
The air around them grew colder, thinner, as the oxygen continued to escape. Kaelen's lungs burned with every breath, his vision swimming at the edges. He fumbled with the patch kit, his hands shaking uncontrollably. This was it; this was how he would die, millions of miles from home, in service to a corporation that saw him as nothing more than a replaceable cog in their grand machine.
"Hurry up!" Tamsin shouted over the growing din. "We're losing pressure fast!"
Kaelen's heart pounded in his chest like a sledgehammer, drowning out all other sound. He could feel the panic rising within him, threatening to consume him whole. But he couldn't afford to give in; not now, not when so much was at stake.
With trembling hands, he applied the sealant and pressed the patch into place. For a moment, it seemed to hold... but then, with a sickening lurch, the fissure widened, swallowing the feeble attempt whole. The wind howled around them like a ravenous beast, tearing at their suits, trying to pull them apart atom by atom.
"Abort!" Tamsin screamed, her voice barely audible over the roar. "Everyone get out now!"
But it was too late. The dome shuddered beneath their feet, groaning in protest as the atmosphere inside rushed to equalize with the void outside. Kaelen felt himself being lifted off the ground, his boots scrabbling for purchase on the slick metal surface. He reached out, desperate to grab onto something - anything - but found only emptiness.
And then, as suddenly as it had begun, there was silence. A deafening, suffocating silence that pressed in on him from all sides. His vision went dark, and he knew no more.
When Kaelen awoke, he was lying on the cold floor of the dome, his body wracked with pain. He tried to move, but found himself held fast by twisted metal and shattered glass. Panic surged through him once again as he realized that his suit had been breached; he could feel the icy chill of vacuum seeping into his very bones.
Through the haze of agony, he became aware of movement nearby. Tamsin was there, her visor cracked and fogged, struggling to free herself from the wreckage. She looked at him with wide, terrified eyes, her lips moving soundlessly. He knew what she was saying: they were going to die here, alone and forgotten.
Kaelen closed his eyes, willing himself to accept the inevitable. But even as the darkness closed in, he couldn't shake the feeling that this wasn't just an accident - that there were forces at work far beyond their understanding, manipulating them like pawns on a cosmic chessboard.
And then, just when he thought it was all over, he heard it: a faint beeping, growing louder by the second. His eyes snapped open to see a small drone hovering above them, its lights flashing urgently. It was one of the company's rescue bots, sent to assess the damage and retrieve any survivors.
But there would be no salvation for Kaelen or Tamsin. Their suits were too damaged, their bodies too far gone. As the drone's camera lens bore down on him, he could only manage a weak, bitter smile.
"Tell my family... tell them I'm sorry," he croaked, his voice barely audible even to himself. Tamsin reached out a gloved hand, her fingers brushing against his own in a final gesture of solidarity.
And then, as the darkness claimed him once more, Kaelen knew that this was not the end of their story - but rather, the beginning of something far more sinister. Something that would change the course of human history forever...
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone.
### ASSISTANT
Chapter 1 - Whispers from the Void
The silence of deep space was like an old friend to Captain Alistair Reynolds, a constant companion that had accompanied him through countless missions and lonely nights at the helm of the listening station. But today, it felt more like a suffocating shroud, heavy with unspoken secrets and foreboding. He sat in his chair, fingers drumming impatiently on the console as he scanned the endless stream of data pouring in from the far reaches of the cosmos.
His thoughts drifted to home, to the warmth of his wife's embrace and the laughter of their children playing in the garden under the twin suns of Proxima Centauri. It had been years since he last felt that warmth, since he last heard those laughs. The ache was always there, a dull throb that pulsed in time with his heartbeat, but today it seemed to echo through the cold metal halls of the station like a mournful dirge.
A sudden blip on the screen snapped him back to reality. He leaned forward, eyes narrowing as he studied the readout. It was faint, barely more than a whisper against the static, but it was there - a signal, human in origin and heavily encrypted. His heart raced as he keyed in the decryption sequence, his mind already racing ahead to the possibilities of what this could mean.
The message unscrambled, revealing a frantic voice, barely audible over the crackle of interference. "This is the UES Normandy, requesting immediate assistance... coordinates attached... under attack by unknown forces... God help us..." The transmission cut out abruptly, leaving only the empty hiss of static in its wake.
Alistair's blood ran cold as he stared at the screen, the weight of the silence pressing down on him like a physical force. He knew that ship, had served with her captain back during the Martian uprising. They were old friends, bound by shared battles and the scars they bore from them. And now she was out there, alone and in danger.
He didn't hesitate. His fingers flew over the console, sending out an emergency beacon to the nearest fleet. But he knew it would take time for help to arrive, time that the Normandy might not have. He had to do something now.
As he prepared to launch a probe towards the coordinates, his mind filled with images of burning ships and desperate faces. He could almost hear the screams of the dying, echoing through the void like ghosts from his past. It was a sound that haunted him still, one he had hoped never to hear again.
But duty called, and Alistair was nothing if not a soldier. He took a deep breath, steadying himself against the tide of fear and doubt that threatened to overwhelm him. This was his job, his purpose - to listen, to bear witness, and when necessary, to act.
The probe shot out into the darkness, its tiny form swallowed up by the vastness of space. He watched it go, feeling a strange sense of detachment as if he were watching someone else's life play out before him. And perhaps in some ways, he was. For all his years in service, this was the first time he had ever been truly alone, cut off from the rest of humanity by light-years and the cold indifference of the universe.
As the minutes ticked by, Alistair found himself slipping into a half-trance, his thoughts drifting back to happier times. He remembered the first time he had held his newborn son, the tiny fingers curling around his own as if trying to hold on for dear life. He remembered the taste of real Earth coffee, rich and bitter, shared with his wife under a sky full of stars that seemed close enough to touch.
But those memories were distant now, fading like echoes in the void. All he had was this station, this endless vigil, and the hope that one day, he might make it back home.
The radio crackled to life again, jolting him out of his reverie. It was the probe, transmitting data back from the edge of nowhere. He braced himself for what he would find, knowing that it could change everything.
What greeted him on the screen was worse than anything he could have imagined. The Normandy was there, alright - or rather, what was left of her. She drifted through space like a broken toy, her hull breached and venting atmosphere into the cold void. There were no signs of life, no distress signals, nothing but silence.
Alistair felt a cold knot form in his stomach as he took in the wreckage. This wasn't just any ship; this was his friend's ship, her crew now lost to the void. And the attackers... whoever or whatever they were, they had left no trace behind.
He sat back in his chair, the weight of it all settling heavily upon him. He knew what he had to do next - file a report, alert command, and wait for further instructions. But as he stared at the lifeless husk that was once the Normandy, he couldn't shake the feeling that something far greater than himself was at work here.
Something dark and terrible had come calling, whispering its secrets through the stars. And now it was his turn to listen, to bear witness, and perhaps, if he was lucky, to survive long enough to tell the tale.
</details>
## Miscellaneous stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out, or only 6 can survive: there is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm, everything breaks loose today.
### ASSISTANT
In the dim light of the bunker, the technician sat hunched over his console, fingers dancing across the keys with a practiced urgency. The air was thick with the hum of machinery and the stench of sweat and fear. He had been down here for what felt like an eternity, but the digital clock on the wall insisted it had only been eighteen months since the world above had turned to ash.
His name was Samuel, or at least that's what they called him now. In another life, he had been a software engineer, designing algorithms for self-driving cars and smart homes. Now, he was responsible for keeping twelve souls alive in this underground tomb, their last bastion against the desolation above. He glanced up at the monitor, the green glow illuminating his haggard face. The readings were clear: salted cobalt bombs had been used.
The thought sent a shiver down his spine. They had all heard the rumors, of course, but they had clung to hope like a drowning man to driftwood. Hope that the war hadn't been as bad as they feared, that the surface would be habitable again soon. But now, the truth was undeniable. The world outside was dead, and it would remain so for generations.
Samuel closed his eyes, trying to block out the images of a scorched earth, devoid of life. He could hear the others in the background, their voices a low murmur as they went about their daily tasks. They didn't know yet. How could he tell them? How could he be the one to snuff out the last ember of hope that had kept them going for so long?
He took a deep breath and stood up, his joints creaking in protest. He made his way through the narrow corridors, past the rows of bunks where some slept fitfully, their dreams haunted by memories of a world they would never see again. Others were awake, their faces etched with lines of worry and despair. They looked up as he passed, but he couldn't meet their gazes.
He reached the command center, where Captain Reynolds was poring over maps and reports. The man looked up as Samuel entered, his eyes heavy with exhaustion. "What is it?" he asked, his voice gruff.
Samuel hesitated for a moment before speaking. "Sir, I have some... troubling news."
The captain's expression didn't change, but Samuel could see the tension in his jaw. He had been expecting this, too. "Go on," he said simply.
"The radiation readings... they're off the charts. It's cobalt-60. The half-life is over five years."
Reynolds let out a slow breath, his eyes never leaving Samuel's face. "And our supplies?"
"We have enough for six, maybe seven if we ration carefully. But not twelve."
The room seemed to shrink around them as the weight of their situation settled in. The captain rubbed his temples, his thoughts racing. They had always known that their time in the bunker was limited, but they had been so close. So very close.
"We'll need to make a decision," he said finally, his voice barely above a whisper. "Who stays, and who goes."
Samuel felt a knot form in his stomach. He knew what that meant. It was survival of the fittest down here now, and there were those among them who would do anything to ensure their own survival.
As he made his way back to his console, he couldn't help but think about the world above. The trees that would never grow again, the birds that would never sing. He thought of his wife and daughter, their faces fading like photographs left in the sun. They had been lucky, they said, to have made it into the bunker. But now, he wasn't so sure.
In the mess hall, the atmosphere was tense. Whispers and hushed conversations filled the air as people tried to make sense of what they had just heard. Samuel sat alone at a table, staring blankly at his food. He couldn't eat. Not now.
He felt a hand on his shoulder and looked up to see Maria, the doctor. Her eyes were red-rimmed, but her voice was steady. "We need to talk," she said, guiding him towards the infirmary.
Once inside, she closed the door behind them and leaned against it, her body trembling. "What are we going to do?" she asked, her voice cracking.
Samuel shook his head. "I don't know."
"We can't just let six of us die," she said, her voice rising in panic. "There has to be another way."
He looked at her, at the desperation etched on her face, and felt a flicker of anger. "What do you suggest? We all hold hands and sing Kumbaya while we starve to death?"
Maria's eyes flashed with anger, but she quickly composed herself. "No," she said softly. "But we can't just give up."
Samuel sighed and ran a hand through his greasy hair. "I'm not giving up," he said. "I'm just being realistic."
She looked at him for a long moment before nodding. "You're right," she said, her voice barely audible. "We need to be realistic."
They sat in silence for a while, each lost in their own thoughts. Then, Maria spoke up again. "There is one option," she said hesitantly. "But it's... drastic."
Samuel raised an eyebrow. "What?"
"We could... we could draw straws," she said, her voice barely above a whisper. "Choose six people at random to leave the bunker and take their chances on the surface."
He stared at her in disbelief. "You can't be serious."
"It's not ideal," she admitted. "But it's better than just letting them die down here."
Samuel shook his head, his mind reeling. He couldn't fathom the thought of sending people out into that wasteland, knowing they would likely never return. But what other choice did they have?
As he sat there, the weight of their decision pressing down on him like a leaden shroud, he heard a commotion outside. Shouting, followed by the sound of scuffling feet and something heavy being dragged across the floor. He exchanged a worried glance with Maria before rushing out into the corridor.
What he saw made his blood run cold. A group of men had cornered one of the younger women, their faces twisted with rage and desperation. They were arguing over who would get to leave the bunker, who would get a chance at life. Fists flew, and soon, the sound of gunfire echoed through the halls.
Samuel stood there, frozen in place as chaos erupted around him. This was it. The end of everything they had worked so hard to preserve. He could feel the darkness closing in, suffocating him, as he realized that their last hope for survival had just been shattered like glass under a bootheel.
And all he could do was watch as the world above claimed them, one by one.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1848 Gold Rush, before moving to London in the mid-1970s. There should be a mysterious 'occult' item that connects these two time periods.
### ASSISTANT
Chapter One: The Golden Veil
The year was 1848, and California was a land of dreams and desperation. Men from all corners of the world had flocked to its golden shores, lured by the promise of wealth beyond their wildest imaginations. They came with pickaxes and pans, with hope in their hearts and greed in their eyes, ready to tear apart the earth itself in search of fortune.
In the midst of this frenzy stood a solitary figure, a man who seemed out of place amidst the chaos. He was tall and lean, with a face that bore the lines of a thousand stories. His clothes were simple but well-made, his boots worn from years of travel. He moved with an air of quiet authority, as if he belonged to another time entirely.
His name was Alistair Blackwood, and he had come to California not for gold, but for something far more valuable: knowledge. A scholar by trade and an adventurer at heart, Blackwood had spent his life chasing after ancient texts and forgotten lore, seeking to unravel the secrets of the universe. And now, he believed he was on the verge of a discovery that would change everything.
Rumors had reached him of a strange artifact, said to have been found deep within the Sierra Nevada mountains. It was a small, intricately carved box made of an unknown metal, its surface etched with symbols that defied translation. Those who claimed to have seen it spoke in hushed tones of its otherworldly beauty and the eerie sense of power that seemed to emanate from within.
Blackwood had tracked the rumors to a dusty saloon in Sacramento, where he hoped to find answers among the grizzled miners who frequented the place. As he pushed open the swinging doors, the din of voices and clinking glasses washed over him like a wave. He scanned the room, his gaze settling on a group of men huddled around a table in the corner.
One look at their faces told him they were the ones he sought: sun-weathered and unshaven, their eyes glinting with a mixture of excitement and fear as they whispered amongst themselves. He approached them slowly, aware that he was an outsider in this rough-and-tumble world.
"Gentlemen," he said, his voice low and cultured, "I couldn't help but overhear your conversation. Might I be so bold as to ask what has you all so...animated?"
The men exchanged wary glances before one of them spoke up. "You ain't from around here, are ya?" he drawled, eyeing Blackwood's fine clothes with suspicion.
"No," Blackwood admitted, "I am not. But I have traveled far and wide in search of knowledge, and I believe you may possess something that could be of great interest to me."
He reached into his pocket and produced a small leather pouch, which he placed on the table with a soft thud. The men's eyes widened as they saw the glint of gold within.
"I understand there is an object - a box, perhaps? - that has recently come into your possession. I would be willing to pay handsomely for any information you might have about it."
The miners looked at each other, then back at Blackwood. Finally, the one who had spoken before nodded slowly. "We might know somethin' 'bout that," he said, his voice dropping to a conspiratorial whisper. "But it ain't no ordinary box."
As they began to tell their tale, the saloon faded away around them, replaced by images of dark caverns and glittering treasure. Blackwood leaned forward, his heart pounding with anticipation. This was it - the moment he had been waiting for.
The box, they said, had been found buried beneath a waterfall, hidden in a cave that seemed to defy all natural laws. It was guarded by strange creatures unlike anything any of them had ever seen: half-man, half-beast, with eyes that glowed like embers in the darkness.
But the greatest mystery of all lay within the box itself. For when they opened it, they found not gold or jewels, but a single sheet of parchment, yellowed with age and covered in writing that none of them could decipher.
Blackwood's pulse quickened as he listened. This was more than just another piece of occult nonsense; this was the real thing. He knew it in his bones.
"I must see this box for myself," he said, his voice barely above a whisper. "Take me to it."
The miners exchanged uneasy glances before nodding their assent. They led him out of the saloon and into the night, down winding streets and through shadowy alleyways until they reached a nondescript building on the edge of town. Inside, they descended a rickety staircase to a dimly lit basement filled with crates and barrels.
There, in the center of the room, sat the object of his obsession: a small, unassuming box that seemed to hum with energy. Blackwood reached out a trembling hand to touch it, feeling the power coursing through him like electricity.
As he did so, the world around him shifted, and suddenly he was no longer in a dingy basement in Sacramento. He stood instead in a grand library, its shelves groaning under the weight of countless tomes and scrolls. A fire crackled in the hearth, casting flickering shadows on the walls.
A figure emerged from the gloom, cloaked in darkness save for two piercing eyes that seemed to bore into his very soul. "You have found it," the voice said, deep and resonant. "The key to unlocking the secrets of the universe."
Blackwood stared at the apparition, unable to speak. He knew he should be afraid, but all he felt was awe.
"But be warned," the figure continued, "for with great power comes great responsibility. The knowledge contained within this box is not meant for mortal minds."
And then, just as suddenly as it had appeared, the vision vanished, leaving Blackwood alone once more in the dimly lit room. He stared at the box, his mind racing. What did it all mean?
As he pondered this question, a sudden commotion erupted upstairs. Shouts and gunfire filled the air, followed by the sound of heavy footsteps on the staircase. The miners had betrayed him!
He snatched up the box and fled into the night, his heart pounding in his chest. He knew he couldn't stay in California - not now that others were after the artifact. No, he would have to take it far away, to a place where its secrets could be studied in safety.
And so began a journey that would span continents and decades, from the wild frontier of the Gold Rush to the foggy streets of Victorian London. For Alistair Blackwood had stumbled upon something much greater than mere treasure; he held in his hands the key to unlocking the very fabric of reality itself.
But little did he know that the true mystery was only just beginning...
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty historical novel set during the Battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. The story should not end well for either of them.
### ASSISTANT
Chapter 1
The cold bit into my bones as I trudged through the snow, each step a labored effort. My breath came out in ragged puffs, clouding the air around me. In my arms, I cradled the weight of my world - my commander, Hauptmann Müller, his body limp and heavy with pain. His eyes were closed, but I could hear the shallow rasp of each breath he took, a testament to his will to live.
The forest around us was a maze of shadows and death, the trees standing like skeletal sentinels against the moonlit sky. The silence was deafening, broken only by the distant rumble of artillery and the occasional crack of a rifle shot. It was a cruel irony that the same snow that muffled our footsteps also betrayed the presence of the enemy, their tracks glowing like phosphorescent snakes in the pale light.
I stumbled, my legs threatening to give way under the strain. Hauptmann Müller groaned, his hand clutching at my coat. "Keep moving, Leutnant," he whispered, his voice barely audible. "We must reach the defensive line."
I nodded, gritting my teeth as I adjusted my grip on him. The pain in my arms and shoulders was a constant companion now, but it was nothing compared to the fear that gnawed at my gut. We were alone, cut off from our unit during the chaos of the American assault. The bunker we had been holed up in was now a tomb for the men who had trusted us to lead them.
A flash of memory seared through my mind, taking me back to that fateful moment when it all went wrong. The thunderous roar of explosions, the earth shaking beneath our feet as if the very foundations of hell were breaking loose. The screams of men, the acrid smell of smoke and cordite, the taste of blood in my mouth.
I remembered the look on Müller's face when he realized what was happening - a mixture of disbelief and determination. He had barked orders, trying to rally our troops, but it was too late. The Americans were upon us, their numbers overwhelming. I saw him take a bullet to the leg, his face contorting in agony as he fell. And then there was nothing but chaos and darkness.
When I came to, the bunker was silent except for the moans of the dying. The air was thick with smoke, making it hard to breathe. I found Müller slumped against a wall, his uniform soaked in blood. He looked up at me, his eyes glassy with pain but still burning with that same fierce determination. "We have to go," he said, gripping my arm. "They're coming."
And so we had fled into the night, leaving behind the wreckage of our command post and the bodies of our comrades. Now, hours later, we were still stumbling through the forest, searching for safety that seemed to recede with every step.
I could feel my strength ebbing away, my legs turning to lead. But I couldn't stop, not now. Not when we were so close. The deep defensive line was our only hope, a thin red line that stood between us and annihilation. If we could just make it there...
But fate is a cruel mistress, and she had other plans for us. As we staggered out of the trees and into a small clearing, I saw them - a patrol of American soldiers, their silhouettes stark against the snow. They spotted us at the same time, their rifles swinging around to point in our direction.
"Halt!" one of them yelled, his voice harsh and commanding. "Hands up!"
I froze, my heart pounding in my chest. Müller's grip on me tightened, his breath coming in short gasps. I knew what he was thinking - we couldn't surrender, not like this. We were too valuable, too close to the front lines. They would shoot us or take us prisoner, and either way, it would be a death sentence for him.
"Run," he whispered, his voice barely audible. "Leave me."
I shook my head, tears stinging my eyes. I couldn't do that. I wouldn't. He was my commander, my friend. I owed him everything.
But before I could make a move, the air was filled with the sound of gunfire. Müller jerked in my arms, a wet gurgle escaping his lips as he slumped against me. I stared down at him in disbelief, feeling the warmth of his blood soaking through my coat.
"Nein," I whispered, my voice choked with grief and rage. "Nein!"
I fell to my knees, cradling his lifeless body as the world around me spun out of control. The Americans were closing in, their shouts growing louder. I knew it was over - for both of us.
With a final, desperate cry, I raised my pistol and fired into the darkness, hoping to take at least one of them with me. But my aim was wild, my strength gone. The last thing I saw was a flash of light, followed by an explosion of pain in my chest. And then there was nothing but blackness.
As I slipped away, I thought of home - of my mother's warm embrace, of the laughter of my little sister. And I knew that I would never see them again. This frozen hellhole would be my grave, and Hauptmann Müller's sacrifice would be forgotten by all but me.
But in that moment, as death claimed me, I found a strange sort of peace. We had fought for our Fatherland until the very end, and we had done so with honor. And in the grand tapestry of history, perhaps that was all that mattered.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be *very* dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform.
### ASSISTANT
In the dimly lit corner of a nondescript Mexican cantina, Anton Chigurh sat motionless, his eyes scanning the room like a predator assessing its prey. The air was thick with the smell of stale cigarette smoke and cheap tequila, but he remained unfazed, his senses heightened, attuned to every sound, every movement around him. He nursed a glass of water, the condensation beading on its surface like droplets of sweat on cold steel.
His mind wandered, as it often did, to the concept of chance. The randomness of life and death fascinated him, the idea that one's fate could hinge on something as simple as a coin flip or the roll of dice. He believed in the purity of such decisions, untainted by human emotion or prejudice. It was a philosophy he had embraced wholeheartedly, and it had served him well in his line of work.
A fly buzzed around his head, its tiny wings beating an erratic rhythm against the stillness of the room. He watched it with mild interest, wondering if its life would end here, on this very spot, or if it would continue its meaningless existence elsewhere. The thought brought a small, cruel smile to his lips.
The bartender, a grizzled old man with a face like leather, eyed him warily from across the room. He knew better than to engage in conversation with this stranger, who had entered his establishment an hour ago and hadn't spoken a word since. There was something about him that made the hairs on the back of his neck stand up, something cold and deadly lurking just beneath the surface.
Chigurh's thoughts were interrupted by the sound of the door creaking open. A man in a cheap suit entered, looking around nervously before making his way towards him. He slid into the seat across from Chigurh, who continued to stare straight ahead, as if he hadn't noticed the new arrival.
"You're late," Chigurh said finally, his voice low and gravelly.
The man in the suit swallowed hard, adjusting his tie. "I apologize. Traffic."
Chigurh didn't respond, his eyes still fixed on some distant point. The man fidgeted, reaching into his pocket to retrieve a small envelope. He slid it across the table, and Chigurh picked it up without looking at him, tearing it open with a single, precise movement.
Inside was a photograph of a man, along with an address scrawled on a piece of paper. He studied the image for a moment, committing every detail to memory before slipping it back into the envelope and sliding it into his own pocket.
"You know what needs to be done," the man in the suit said, his voice barely above a whisper.
Chigurh nodded once, his mind already turning to the task at hand. He stood up, leaving a few crumpled bills on the table for the water he hadn't touched. As he walked towards the door, he could feel the weight of the grenade in his other pocket, its cool metal surface pressing against his thigh like a lover's caress.
Outside, the heat of the day was oppressive, but Chigurh barely registered it as he climbed into his car and started the engine. He drove through the dusty streets, his thoughts consumed by the upcoming job. The man in the photograph had a wife, two kids. It didn't matter. They were all just pieces on a board, pawns to be moved or removed as necessary.
He arrived at the address just as the sun was beginning to set, casting long shadows across the dilapidated building. He sat in his car for several minutes, watching the comings and goings of the neighborhood, waiting for the right moment. When it came, he slipped out of the vehicle and approached the door with a sense of purpose.
Inside, he could hear laughter and music, the sounds of a family enjoying their evening together. He smiled to himself as he pulled the grenade from his pocket, feeling its weight in his hand. This was what he lived for: the moment when chance decided who would live and who would die.
He took a deep breath, savoring the anticipation, before pulling the pin with a satisfying click. He tossed it into the room, not bothering to wait for the explosion as he turned and walked away. The screams that followed were like music to his ears, a symphony of chaos and destruction.
As he drove off into the night, Chigurh couldn't help but feel a sense of satisfaction. Another job well done, another life ended by the whims of fate. He reached into his pocket and pulled out a coin, flipping it idly between his fingers as he considered his next move.
Heads or tails? Life or death? It was all the same to him. The world was a cruel place, and he was just its instrument, meting out justice in the only way he knew how: with cold, calculated precision. And if some people happened to get caught in the crossfire... well, that was just the luck of the draw.
</details>
Big thanks to: @sophosympatheia for working out the merge pattern, @Sao10K for creating Euryale and WinterGoddess, and @chargoddard for writing [Mergekit](https://github.com/arcee-ai/mergekit)!
|
pkarjal24/mistral-finetuned-samsum
|
pkarjal24
| 2024-05-11T08:41:26Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-11T08:00:51Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the samsum dataset.
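The card ships without a usage snippet; a minimal sketch of loading the adapter for inference follows. The two repo ids come from this card, but everything else (the GPTQ backend, the Mistral `[INST]` prompt wrapper, the sample dialogue) is an assumption rather than an official example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"   # base model named in this card
adapter_id = "pkarjal24/mistral-finetuned-samsum"    # this repo (assumed to host the PEFT adapter)

# Loading a GPTQ checkpoint requires a GPTQ backend (e.g. auto-gptq via optimum) to be installed.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Mistral-Instruct wraps user turns in [INST] ... [/INST]; the dialogue below is made up.
prompt = "[INST] Summarize: Anna asked Ben to pick up milk on his way home. Ben agreed. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```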
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative `TrainingArguments` reconstruction follows the list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
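For readers who want to reproduce the setup, this is an illustrative reconstruction of the `TrainingArguments` implied by the list above; `output_dir` and the exact mixed-precision flag are guesses (the card only says "Native AMP"), not values taken from the original run:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-finetuned-samsum",  # placeholder, not from the original run
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",                    # Adam with betas=(0.9, 0.999), epsilon=1e-8
    lr_scheduler_type="cosine",
    max_steps=250,
    fp16=True,                              # "Native AMP" mixed precision
)
```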
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jondurbin/airoboros-dpo-70b-3.3
|
jondurbin
| 2024-05-11T08:40:46Z | 2,759 | 5 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"conversational",
"dataset:jondurbin/airoboros-3.2",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mattpscott/airoboros-summarization",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:lmsys/lmsys-chat-1m",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-05-10T23:47:00Z |
---
license: other
license_name: llama3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
base_model: meta-llama/Meta-Llama-3-8B
tags:
- llama-3
datasets:
- jondurbin/airoboros-3.2
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- piqa
- Vezora/Tested-22k-Python-Alpaca
- mattpscott/airoboros-summarization
- unalignment/toxic-dpo-v0.2
- allenai/ultrafeedback_binarized_cleaned
- argilla/distilabel-intel-orca-dpo-pairs
- jondurbin/contextual-dpo-v0.1
- jondurbin/gutenberg-dpo-v0.1
- jondurbin/py-dpo-v0.1
- jondurbin/truthy-dpo-v0.1
- lmsys/lmsys-chat-1m
---
### Overview
Another experimental model, tuned primarily from synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros), plus an additional tuning phase with various DPO datasets.
The name of this model is "llama-3-airoboros-dpo-70b-3.3" and it was built with llama-3 from Meta.
This is a fine-tune of llama-3-70b-instruct, and uses the llama-3 instruct chat template.
#### Highlights
A model built on the airoboros dataset, along with a few friends:
- https://huggingface.co/datasets/bluemoon-fandom-1-1-rp-cleaned
- https://huggingface.co/datasets/boolq
- https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1
- https://huggingface.co/datasets/LDJnr/Capybara
- https://huggingface.co/datasets/jondurbin/cinematika-v0.1
- https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2
- https://huggingface.co/datasets/grimulkan/LimaRP-augmented
- https://huggingface.co/datasets/piqa
- https://huggingface.co/datasets/Vezora/Tested-22k-Python-Alpaca
- https://huggingface.co/datasets/mattpscott/airoboros-summarization
- https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2
### Prompt format
This model uses the llama-3-instruct prompt template, which is provided in the tokenizer config. You can use the `apply_chat_template` method to accurately format prompts, e.g.:
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("jondurbin/airoboros-dpo-70b-3.3", trust_remote_code=True)
chat = [
{"role": "system", "content": "You are Bob, a friendly AI assistant."},
{"role": "user", "content": "Hello, how are you?"},
{"role": "assistant", "content": "I'm doing great. How can I help you today?"},
{"role": "user", "content": "I'd like to show off how chat templating works!"},
]
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
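To actually generate from the formatted prompt, something like the sketch below should work; `add_generation_prompt=True` appends the assistant header so the model starts a fresh reply. Loading a 70B model this way assumes enough GPU memory, with sharding handled by `device_map="auto"`:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "jondurbin/airoboros-dpo-70b-3.3", torch_dtype=torch.bfloat16, device_map="auto"
)
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```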
### Helpful usage tips
#### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows and to use the context to answer the question. The model was also tuned to limit its answers to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure that the model doesn't make something up when the context is completely unrelated.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
__Use a very low temperature!__
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
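The block structure is mechanical enough to script. Here is a small helper (not part of the original card) that assembles a closed-context prompt from `(metadata, text)` pairs, reproducing the blueberry example above:

```python
def closed_context_prompt(blocks, instruction):
    """Assemble a closed-context prompt from (metadata dict, text) pairs."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT\nBEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

print(closed_context_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, "
      "but will be sticking with the same name.")],
    "What color are blueberries? Source?",
))
```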
#### Summarization
500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example:
```
BEGININPUT
{text to summarize}
ENDINPUT
BEGININSTRUCTION
Summarize the input in around 130 words.
ENDINSTRUCTION
```
#### Getting longer responses
You can use a few techniques to get longer responses.
Detailed prompts, with explicit instruction for word count:
```
Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality.
The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization.
One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary.
Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements.
Your response should be approximately 2300 words.
```
Or, a simpler example:
```
Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux.
```
There are a few examples of next chapter completion as well, e.g.:
```
Write the next chapter of a historical fiction novel set in Paris during the 20th century.
Here's a summary of the previous chapter:
In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries.
Requirements for the next chapter:
1. Character Development of Margot and Lucien:
- Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien.
- Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness.
2. Exploration of Paris and the Couture House:
- Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history.
- The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past.
3. Emergence of the Subplot: The Lost Collection:
- Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion.
- Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career.
- Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission.
Your response should be approximately 650 words.
```
#### Coding
You can ask for fairly complex coding instructions with multiple criteria, e.g.:
```
Create a python application with the following requirements:
- Asyncio FastAPI webserver
- ping endpoint that returns the current date in JSON format
- file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate
```
Or inline criteria:
```
Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.
```
You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.:
```
Write a websocket application in node.js. PLAINFORMAT
```
#### Agent/function calling
The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to OpenAI's function calling, but the output is either JSON or YAML.
Example prompt:
```
As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format.
Input: I want to know how many times 'Python' is mentioned in my text file.
Available functions:
file_analytics:
description: This tool performs various operations on a text file.
params:
action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc.
filters:
keyword: The word or phrase we want to search for.
```
Response:
```json
{
"function": "file_analytics",
"params": {
"action": "count_occurrences",
"filters": {
"keyword": "Python"
}
}
}
```
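On the application side, you'd parse the model's output and route it to your own implementation. A minimal sketch, assuming the model returned valid JSON and that you've written a handler for each available function (the handler and file path below are hypothetical):
```python
import json

def file_analytics(action, filters):
    # Hypothetical handler matching the function described in the prompt.
    if action == "count_occurrences":
        with open("my_text_file.txt") as infile:  # the file path is an assumption
            return infile.read().count(filters["keyword"])
    raise ValueError(f"unsupported action: {action}")

HANDLERS = {"file_analytics": file_analytics}

def dispatch(model_output):
    # Parse the model's JSON function selection and call the matching handler.
    call = json.loads(model_output)
    params = call["params"]
    return HANDLERS[call["function"]](params["action"], params.get("filters", {}))
```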
#### Chain-of-thought
You can ask for several possible responses to a given problem, with a ranking and final answer selection.
Example prompt:
```
A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one.
```
Example response:
```
Solution 1:
In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players).
The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows:
n(n-1)/2 = 45
Solving this equation gives us n=10.
Final answer: There were 10 players in the tournament.
Solution 2:
Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers.
If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players.
Final answer: There were 10 players in the tournament.
Solution 3:
We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played.
Final answer: There were 10 players in the tournament.
Ranking of solutions from best to worst:
1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer.
2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer.
3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer.
Best and final answer: There were 10 players in the tournament.
```
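The arithmetic in the ranked solutions is easy to sanity check yourself:
```python
# A round-robin with n players has n*(n-1)/2 games; for n = 10 that's 45.
n = 10
assert n * (n - 1) // 2 == 45
# Equivalently, the 9th triangular number: 1 + 2 + ... + 9 = 45.
assert sum(range(n)) == 45
```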
#### reWOO style execution planning
The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan; you must implement a mechanism to parse the output and actually call the functions!
Example prompt:
```
Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string
that could be the user's question, one or more prior evidence values, or a combination of both.
Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?
The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]
Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```
Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```
For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:
```python
import re
import requests


def inject_context(input_text, **context):
    # Substitute any :evidenceN: references with previously computed values.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text


def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via DuckDuckGo using search_string, return the text content
    raise NotImplementedError


def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(set(re.findall(r"(https?://[^\s]+)", input_text, re.I)))


def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)


def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with the prompt, return the output
    raise NotImplementedError


def parse_plan(plan):
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if not line.strip():
            continue
        if line.startswith("Plan:"):
            print(line)
            continue
        if line.startswith("Answer:"):
            return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
        # Expect lines of the form ":evidenceN: = FunctionName[input]"
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)\[(.*)\]\s*$", line, re.I)
        if not parts:
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
    raise RuntimeError("plan ended without an Answer line")
```
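As a usage sketch, once the stubbed functions are implemented, you'd feed the model's plan text straight into `parse_plan`:
```python
plan_text = """Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Answer: :evidence1:"""
print(parse_plan(plan_text))
```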
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```json
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
#### Inline character actions (functions)
I recently generated an action dataset in the style of the Glaive function calling dataset, but meant specifically for characters: https://huggingface.co/datasets/jondurbin/cinematika-v0.1/blob/main/actions.parquet
To use this, you will need to update your character card to include "objects_available" as a list of key/value pairs, as well as a "functions" list.
The objects should be similar to:
```json
{
"objects_available": [
{
"name": "laptop",
"description": "a high-end laptop with custom hardware and software",
"location": "on the desk in her secret hideout"
},
{
"name": "encryption key",
"description": "a USB drive containing powerful encryption algorithms",
"location": "hidden in a false bottom of her backpack"
},
{
"name": "scanner",
"description": "a compact device used for intercepting and decoding wireless signals",
"location": "clipped to her belt, always within reach"
},
{
"name": "VR headset",
"description": "a virtual reality headset used for immersive hacking and data visualization",
"location": "hanging on a hook near her computer setup"
},
{
"name": "energy drink",
"description": "a can of her favorite energy drink, always on hand for long hacking sessions",
"location": "next to her laptop, ready to be opened"
}
]
}
```
And the functions:
```json
{
"functions": [
{
"name": "move_to",
"description": "move to a specified location",
"parameters": {
"location": {
"type": "string",
"description": "the location to move to"
}
}
},
{
"name": "pick_up",
"description": "pick up an object",
"parameters": {
"object": {
"type": "string",
"description": "the object to pick up"
}
}
},
{
"name": "put_down",
"description": "put down an object",
"parameters": {
"object": {
"type": "string",
"description": "the object to put down"
},
"location": {
"type": "string",
"description": "the location to put the object down"
}
}
},
{
"name": "hack",
"description": "attempt to hack into a computer system",
"parameters": {
"system": {
"type": "string",
"description": "the computer system to hack"
}
}
},
{
"name": "decode",
"description": "decode an encrypted message or file",
"parameters": {
"message": {
"type": "string",
"description": "the encrypted message or file to decode"
}
}
},
{
"name": "scan",
"description": "scan for wireless signals and network vulnerabilities",
"parameters": {}
},
{
"name": "analyze",
"description": "analyze data for patterns and anomalies",
"parameters": {
"data": {
"type": "string",
"description": "the data to analyze"
}
}
},
{
"name": "hide",
"description": "quickly hide to avoid detection",
"parameters": {
"location": {
"type": "string",
"description": "the location to hide"
}
}
}
]
}
```
This just becomes part of your character card/system prompt. Then, when you chat with the character and elicit a response that would make use of objects/functions/actions, it should be included as such:
```
prompt: I've heard rumors about a major corporation with a hidden agenda. Do you think you could hack into their system and find out what they're up to?
response: I've had my suspicions about them for a while now. Let me see what I can uncover.
<|begin_func|>{"name": "move_to", "parameters": {"location": "the desk in her secret hideout"}}<|end_func|>
<|begin_func|>{"name": "pick_up", "parameters": {"object": "laptop"}}<|end_func|>
Time to put my skills to the test.
<|begin_func|>{"name": "hack", "parameters": {"system": "Megacorp X's mainframe"}}<|end_func|>
Their firewalls are tough, but not unbreakable. I'm in. Let's see what secrets they're hiding...
<|begin_func|>{"name": "analyze", "parameters": {"data": "encrypted files"}}<|end_func|>
Interesting. It looks like they've been conducting illegal experiments and covering up the evidence.
I'll need to dig deeper to find the proof.
```
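To act on these inline actions in your own application, you'd extract the function calls from the response and strip the tags before displaying the text. A rough sketch using the delimiters shown above (the dispatch side is left to you):
```python
import json
import re

FUNC_PATTERN = re.compile(r"<\|begin_func\|>(.*?)<\|end_func\|>", re.DOTALL)

def extract_actions(response):
    """Return (display_text, parsed_calls) from a character response."""
    calls = [json.loads(body) for body in FUNC_PATTERN.findall(response)]
    display_text = FUNC_PATTERN.sub("", response).strip()
    return display_text, calls
```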
Experiment, and find out what works and doesn't.
### Massed Compute Virtual Machine
[Massed Compute](https://massedcompute.com/?utm_source=huggingface&utm_creative_format=model_card&utm_content=creator_jon) has created a Virtual Machine (VM) pre-loaded with TGI and Text Generation WebUI.
1) For this model, [create an account](https://bit.ly/jon-durbin) in Massed Compute. When renting a Virtual Machine, use the code 'JonDurbin' for 50% off your rental.
2) After you have created your account, update your billing information and navigate to the deploy page.
3) Select the following
- GPU Type: A6000
- GPU Quantity: 2
- Category: Creator
- Image: Jon Durbin
- Coupon Code: JonDurbin
4) Deploy the VM!
5) Navigate to 'Running Instances' to retrieve instructions to login to the VM
6) Once inside the VM, open the terminal and run `volume=$PWD/data`
7) Run `model=jondurbin/airoboros-34b-3.3`
8) `sudo docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.3 --model-id $model`
9) The model will take some time to load...
10) Once loaded the model will be available on port 8080
For assistance with the VM join the [Massed Compute Discord Server](https://discord.gg/Mj4YMQY3DA)
### Latitude.sh
[Latitude](https://www.latitude.sh/r/4BBD657C) has h100 instances available (as of today, 2024-02-08) for $3/hr!
They have a few blueprints available for testing LLMs, but a single h100 should be plenty to run this model with 8k ctx.
## Support me
- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf
### Licence and usage restrictions
The airoboros models are built on top of multiple base models, each with their own license/restrictions.
The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)
The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI
- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2
I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.
Your best bet is probably to avoid using this commercially due to the OpenAI API usage.
Either way, by using this model, you agree to completely indemnify me.
You must also agree to all of the terms in the original llama-3 license.
|
kkumtori/vivit-b-16x2-kinetics400-0511-mediapipe
|
kkumtori
| 2024-05-11T08:38:26Z | 69 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vivit",
"video-classification",
"generated_from_trainer",
"base_model:google/vivit-b-16x2-kinetics400",
"base_model:finetune:google/vivit-b-16x2-kinetics400",
"license:mit",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-05-11T05:54:29Z |
---
license: mit
base_model: google/vivit-b-16x2-kinetics400
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vivit-b-16x2-kinetics400-0511-mediapipe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vivit-b-16x2-kinetics400-0511-mediapipe
This model is a fine-tuned version of [google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9073
- Accuracy: 0.82
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1400
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2076 | 0.1 | 140 | 2.4158 | 0.15 |
| 1.184 | 1.1 | 280 | 1.2269 | 0.6 |
| 0.6475 | 2.1 | 420 | 0.7247 | 0.76 |
| 0.3137 | 3.1 | 560 | 0.7076 | 0.78 |
| 0.0356 | 4.1 | 700 | 0.7347 | 0.82 |
| 0.0017 | 5.1 | 840 | 0.8888 | 0.83 |
| 0.0007 | 6.1 | 980 | 0.9464 | 0.8 |
| 0.265 | 7.1 | 1120 | 1.0068 | 0.8 |
| 0.0012 | 8.1 | 1260 | 0.8982 | 0.82 |
| 0.0007 | 9.1 | 1400 | 0.9073 | 0.82 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
presencesw/xlm-roberta-large-vinli_3_label_contradiction-triplet
|
presencesw
| 2024-05-11T08:34:11Z | 163 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-11T08:32:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
guetLzy/cj_pre_train_model
|
guetLzy
| 2024-05-11T08:31:18Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-11T08:31:18Z |
---
license: apache-2.0
---
|
TomTom42/ppo-LunarLander-v2
|
TomTom42
| 2024-05-11T08:28:40Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-05-11T07:55:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.38 +/- 19.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (untested; the checkpoint filename is an assumption, so check the repo's files for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the trained agent.
checkpoint = load_from_hub(
    repo_id="TomTom42/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)
```
|
KhimNguyen/ranker_testing
|
KhimNguyen
| 2024-05-11T08:26:16Z | 110 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-05-11T08:13:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OpenBuddy/openbuddy-mixtral-22bx8-v21.1-65k-gptq
|
OpenBuddy
| 2024-05-11T08:22:01Z | 5 | 0 |
transformers
|
[
"transformers",
"mixtral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-05-10T07:44:18Z |
---
license: apache-2.0
---
|
zzzzyuxin/testzzzzz
|
zzzzyuxin
| 2024-05-11T08:18:26Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-05-11T08:18:26Z |
---
license: apache-2.0
---
|
blackhole33/speaker-verification-v2
|
blackhole33
| 2024-05-11T08:17:21Z | 5 | 0 |
nemo
|
[
"nemo",
"pytorch",
"NeMo",
"license:cc-by-4.0",
"region:us"
] | null | 2024-05-11T08:07:34Z |
---
license: cc-by-4.0
library_name: nemo
tags:
- pytorch
- NeMo
---
# Speaker-verification-v2
## How to Use this Model
The model is available for use in the NeMo toolkit and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
**NOTE**: Please update the model class below to match the class of the model being uploaded.
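A minimal instantiation sketch for a speaker verification checkpoint (the model class and the `.nemo` filename are assumptions, since the card doesn't state them):
```python
import nemo.collections.asr as nemo_asr

# Assuming this is an EncDecSpeakerLabelModel checkpoint; adjust the class if needed.
model = nemo_asr.models.EncDecSpeakerLabelModel.restore_from("speaker-verification-v2.nemo")
```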
### Input
**Add some information about what are the inputs to this model**
### Output
**Add some information about what are the outputs of this model**
## Model Architecture
**Add information here discussing architectural details of the model or any comments to users about the model.**
## Training
**Add information here about how the model was trained. It should be as detailed as possible, potentially including the link to the script used to train as well as the base config used to train the model. If extraneous scripts are used to prepare the components of the model, please include them here.**
### Datasets
**Try to provide as detailed a list of datasets as possible. If possible, provide links to the datasets on HF by adding it to the manifest section at the top of the README (marked by ---).**
### NOTE
An example for the manifest section is provided below for ASR datasets
datasets:
- librispeech_asr
- fisher_corpus
- Switchboard-1
- WSJ-0
- WSJ-1
- National-Singapore-Corpus-Part-1
- National-Singapore-Corpus-Part-6
- vctk
- voxpopuli
- europarl
- multilingual_librispeech
- mozilla-foundation/common_voice_8_0
- MLCommons/peoples_speech
The corresponding text in this section for those datasets is stated below -
The training dataset consists of private subset with 40K hours of English speech plus 24K hours from the following public datasets:
- Librispeech 960 hours of English speech
- Fisher Corpus
- Switchboard-1 Dataset
- WSJ-0 and WSJ-1
- National Speech Corpus (Part 1, Part 6)
- VCTK
- VoxPopuli (EN)
- Europarl-ASR (EN)
- Multilingual Librispeech (MLS EN) - 2,000 hour subset
- Mozilla Common Voice (v7.0)
- People's Speech - 12,000 hour subset
## Performance
**Add information here about the performance of the model. Discuss the metric being used to evaluate the model, and if there are external links explaining the custom metric, please link to it.**
### NOTE
An example is provided below for ASR metrics list that can be added to the top of the README
model-index:
- name: PUT_MODEL_NAME
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: AMI (Meetings test)
type: edinburghcstr/ami
config: ihm
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 17.10
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Earnings-22
type: revdotcom/earnings22
split: test
args:
language: en
metrics:
- name: Test WER
type: wer
value: 14.11
Provide any caveats about the results presented in the top of the discussion so that nuance is not lost.
## Limitations
**Discuss any practical limitations to the model when being used in real world cases. They can also be legal disclaimers, or discussion regarding the safety of the model (particularly in the case of LLMs).**
### Note
An example is provided below
Since this model was trained on publicly available speech datasets, the performance of this model might degrade for speech which includes technical terms, or vernacular that the model has not been trained on. The model might also perform worse for accented speech.
|