| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
ND911/EE-Silicon-Maid-7B-slerp-gguf
|
ND911
| 2024-01-22T17:15:23Z | 1 | 2 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Silicon-Maid-7B",
"SanjiWatsuki/Loyal-Macaroni-Maid-7B",
"base_model:SanjiWatsuki/Loyal-Macaroni-Maid-7B",
"base_model:merge:SanjiWatsuki/Loyal-Macaroni-Maid-7B",
"base_model:SanjiWatsuki/Silicon-Maid-7B",
"base_model:merge:SanjiWatsuki/Silicon-Maid-7B",
"endpoints_compatible",
"region:us"
] | null | 2024-01-22T17:02:18Z |
---
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Silicon-Maid-7B
- SanjiWatsuki/Loyal-Macaroni-Maid-7B
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- SanjiWatsuki/Loyal-Macaroni-Maid-7B
---
# EE-Silicon-Maid-7B-Slerp.gguf
EE-Silicon-Maid-7B is a SLERP merge of the following models, made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Silicon-Maid-7B
layer_range: [0, 32]
- model: SanjiWatsuki/Loyal-Macaroni-Maid-7B
layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Silicon-Maid-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ND911/EE-Silicon-Maid-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
MartynaKopyta/BERT_hate_offensive_tweets
|
MartynaKopyta
| 2024-01-22T17:12:47Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-18T10:56:41Z |
---
license: mit
---
# Model Card for BERT hate offensive tweets
BERT base uncased fine-tuned on [MartynaKopyta/hate_offensive_tweets](https://huggingface.co/datasets/MartynaKopyta/hate_offensive_tweets) to classify tweets as 0 - hate, 1 - offensive, or 2 - neither.
You can find the notebook used for training in my GitHub repo: [MartynaKopyta/BERT_FINE-TUNING](https://github.com/MartynaKopyta/BERT_FINE-TUNING/blob/main/BERT_hate_offensive_speech.ipynb).
## Model Details
- **Finetuned from model [bert-base-uncased](https://huggingface.co/bert-base-uncased)**
## Bias, Risks, and Limitations
The dataset was not large enough for BERT to learn to separate the three classes reliably; the model is correct roughly three out of four times.
## How to Get Started with the Model
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained('MartynaKopyta/BERT_hate_offensive_tweets')
tokenizer = AutoTokenizer.from_pretrained('MartynaKopyta/BERT_hate_offensive_tweets')
```
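A minimal classification sketch building on the snippet above; the example text is illustrative, and the id-to-label mapping follows the description at the top of this card:
```python
import torch

text = "example tweet text"  # illustrative input
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
labels = {0: "hate", 1: "offensive", 2: "neither"}
print(labels[logits.argmax(dim=-1).item()])
```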
#### Training Hyperparameters
- **batch size:16**
- **learning rate:2e-5**
- **epochs:3**
## Evaluation
```
Accuracy: 0.779373368146214
Classification Report:
precision recall f1-score support
0 0.74 0.68 0.71 1532
1 0.85 0.88 0.87 1532
2 0.74 0.78 0.76 1532
accuracy 0.78 4596
macro avg 0.78 0.78 0.78 4596
weighted avg 0.78 0.78 0.78 4596
Confusion Matrix:
[[1043 96 393]
[ 169 1343 20]
[ 204 132 1196]]
MCC: 0.670
```
|
LoneStriker/Gorgon-7b-v0.1-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-22T17:11:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"Erotica",
"Porn",
"NSFW",
"Summarization",
"Ecommerce",
"SEO",
"en",
"dataset:openerotica/gorgon-lima-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T17:10:03Z |
---
license: apache-2.0
datasets:
- openerotica/gorgon-lima-v0.1
language:
- en
tags:
- Erotica
- Porn
- NSFW
- Summarization
- Ecommerce
- SEO
---
This is an experimental LIMA-style model trained on a small subset of freedom-rp and erotica-analysis-16k. Due to the much smaller dataset size (about 1,000 samples from each original dataset), it was much easier to edit and clean thoroughly. I also used a slightly lower learning rate of 0.00015.
The prompt format is ChatML.
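For reference, a ChatML prompt looks like this (the system and user messages are illustrative):
```
<|im_start|>system
You are a helpful writing assistant.<|im_end|>
<|im_start|>user
Write a short product description for an online store listing.<|im_end|>
<|im_start|>assistant
```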
I have not tested the model yet, but I am hoping I can use this to help me create more training data for specific genres.
Please consider subscribing to my patreon or buying a giant candle dick on my etsy to show your support.
https://www.patreon.com/openerotica
http://openerotica.etsy.com/
|
ddh0/OrcaMaid-v3-13b-32k
|
ddh0
| 2024-01-22T17:10:32Z | 14 | 7 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-09T20:17:18Z |
---
license: other
tags:
- merge
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE
pipeline_tag: text-generation
---
# OrcaMaid-v3-13b-32k
This is the third version of OrcaMaid, a weighted gradient SLERP merge between Microsoft's [Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) and NeverSleep's [Noromaid-13b-v0.3](https://huggingface.co/NeverSleep/Noromaid-13b-v0.3).
The goal of this merge is to create an unusually intelligent and human-like model especially for RP.
The prompt format is Alpaca. You can use the standard format as shown, but for best results, you should customize the system prompt to your specific needs.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{YOUR MESSAGE HERE}
### Response:
{BOT MESSAGE HERE}
```
### Misc. information
- BOS token is `<s>`
- EOS token is `</s>`
- Native context length is `32768` via YaRN (original context length was `4096`)
- Base model is Llama 2
- Due to the inclusion of Orca-2-13b, the model is subject to the terms of the [Microsoft Research License](https://huggingface.co/microsoft/Orca-2-13b/blob/main/LICENSE)
### Thanks
- Thanks to [Undi](https://ko-fi.com/undiai) and [IkariDev](https://ikaridevgit.github.io/) of [NeverSleep](https://huggingface.co/NeverSleep) for Noromaid
|
LoneStriker/Medorca-2x7b-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-22T17:09:55Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"epfl-llm/meditron-7b",
"microsoft/Orca-2-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T17:05:15Z |
---
license: apache-2.0
tags:
- moe
- merge
- epfl-llm/meditron-7b
- microsoft/Orca-2-7b
---

# Medorca-2x7b
Medorca-2x7b is a Mixture of Experts (MoE) made with the following models:
* [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b)
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
## Evaluations
| Benchmark | Medorca-2x7b | Orca-2-7b | llama-2-7b | meditron-7b | meditron-70b |
| --- | --- | --- | --- | --- | --- |
| MedMCQA | | | | | |
| ClosedPubMedQA | | | | | |
| PubMedQA | | | | | |
| MedQA | | | | | |
| MedQA4 | | | | | |
| MedicationQA | | | | | |
| MMLU Medical | | | | | |
| MMLU | 53.3 | **56.37** | | | |
| TruthfulQA | 48.04 | **52.45** | | | |
| GSM8K | **20.64** | 14.71 | | | |
| ARC | 54.1 | 54.1 | | | |
| HellaSwag | 76.04 | **76.19** | | | |
| Winogrande | **74.51** | 73.48 | | | |
More details on the Open LLM Leaderboard evaluation results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Technoculture__Medorca-2x7b).
## 🧩 Configuration
```yaml
base_model: microsoft/Orca-2-7b
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: epfl-llm/meditron-7b
positive_prompts:
- "How does sleep affect cardiovascular health?"
- "Could a plant-based diet improve arthritis symptoms?"
- "A patient comes in with symptoms of dizziness and nausea..."
- "When discussing diabetes management, the key factors to consider are..."
- "The differential diagnosis for a headache with visual aura could include..."
negative_prompts:
- "Recommend a good recipe for a vegetarian lasagna."
- "Give an overview of the French Revolution."
- "Explain how a digital camera captures an image."
- "What are the environmental impacts of deforestation?"
- "The recent advancements in artificial intelligence have led to developments in..."
- "The fundamental concepts in economics include ideas like supply and demand, which explain..."
- source_model: microsoft/Orca-2-7b
positive_prompts:
- "Here is a funny joke for you -"
- "When considering the ethical implications of artificial intelligence, one must take into account..."
- "In strategic planning, a company must analyze its strengths and weaknesses, which involves..."
- "Understanding consumer behavior in marketing requires considering factors like..."
- "The debate on climate change solutions hinges on arguments that..."
negative_prompts:
- "In discussing dietary adjustments for managing hypertension, it's crucial to emphasize..."
- "For early detection of melanoma, dermatologists recommend that patients regularly check their skin for..."
- "Explaining the importance of vaccination, a healthcare professional should highlight..."
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Technoculture/Medorca-2x7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16},
)
messages = [{"role": "user", "content": "Why am i feeling so tired this month?"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
marievcht/ppo-LunarLander-v2
|
marievcht
| 2024-01-22T17:05:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-12T16:47:16Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.82 +/- 23.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
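Until then, a minimal loading sketch; the checkpoint filename `ppo-LunarLander-v2.zip` is an assumption based on the usual course naming convention:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename assumed) and load it.
checkpoint = load_from_hub("marievcht/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate over a few episodes (requires gymnasium[box2d]).
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```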
|
LoneStriker/Medorca-2x7b-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-22T17:05:13Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"epfl-llm/meditron-7b",
"microsoft/Orca-2-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T17:00:05Z |
---
license: apache-2.0
tags:
- moe
- merge
- epfl-llm/meditron-7b
- microsoft/Orca-2-7b
---

# Medorca-2x7b
Medorca-2x7b is a Mixture of Experts (MoE) made with the following models:
* [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b)
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
## Evaluations
| Benchmark | Medorca-2x7b | Orca-2-7b | llama-2-7b | meditron-7b | meditron-70b |
| --- | --- | --- | --- | --- | --- |
| MedMCQA | | | | | |
| ClosedPubMedQA | | | | | |
| PubMedQA | | | | | |
| MedQA | | | | | |
| MedQA4 | | | | | |
| MedicationQA | | | | | |
| MMLU Medical | | | | | |
| MMLU | 53.3 | **56.37** | | | |
| TruthfulQA | 48.04 | **52.45** | | | |
| GSM8K | **20.64** | 14.71 | | | |
| ARC | 54.1 | 54.1 | | | |
| HellaSwag | 76.04 | **76.19** | | | |
| Winogrande | **74.51** | 73.48 | | | |
More details on the Open LLM Leaderboard evaluation results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Technoculture__Medorca-2x7b).
## 🧩 Configuration
```yaml
base_model: microsoft/Orca-2-7b
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: epfl-llm/meditron-7b
positive_prompts:
- "How does sleep affect cardiovascular health?"
- "Could a plant-based diet improve arthritis symptoms?"
- "A patient comes in with symptoms of dizziness and nausea..."
- "When discussing diabetes management, the key factors to consider are..."
- "The differential diagnosis for a headache with visual aura could include..."
negative_prompts:
- "Recommend a good recipe for a vegetarian lasagna."
- "Give an overview of the French Revolution."
- "Explain how a digital camera captures an image."
- "What are the environmental impacts of deforestation?"
- "The recent advancements in artificial intelligence have led to developments in..."
- "The fundamental concepts in economics include ideas like supply and demand, which explain..."
- source_model: microsoft/Orca-2-7b
positive_prompts:
- "Here is a funny joke for you -"
- "When considering the ethical implications of artificial intelligence, one must take into account..."
- "In strategic planning, a company must analyze its strengths and weaknesses, which involves..."
- "Understanding consumer behavior in marketing requires considering factors like..."
- "The debate on climate change solutions hinges on arguments that..."
negative_prompts:
- "In discussing dietary adjustments for managing hypertension, it's crucial to emphasize..."
- "For early detection of melanoma, dermatologists recommend that patients regularly check their skin for..."
- "Explaining the importance of vaccination, a healthcare professional should highlight..."
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Technoculture/Medorca-2x7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16},
)
messages = [{"role": "user", "content": "Why am i feeling so tired this month?"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
LoneStriker/Medorca-2x7b-GGUF
|
LoneStriker
| 2024-01-22T17:03:44Z | 34 | 3 | null |
[
"gguf",
"moe",
"merge",
"epfl-llm/meditron-7b",
"microsoft/Orca-2-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-22T16:27:08Z |
---
license: apache-2.0
tags:
- moe
- merge
- epfl-llm/meditron-7b
- microsoft/Orca-2-7b
---

# Medorca-2x7b
Medorca-2x7b is a Mixture of Experts (MoE) made with the following models:
* [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b)
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
## Evaluations
| Benchmark | Medorca-2x7b | Orca-2-7b | llama-2-7b | meditron-7b | meditron-70b |
| --- | --- | --- | --- | --- | --- |
| MedMCQA | | | | | |
| ClosedPubMedQA | | | | | |
| PubMedQA | | | | | |
| MedQA | | | | | |
| MedQA4 | | | | | |
| MedicationQA | | | | | |
| MMLU Medical | | | | | |
| MMLU | 53.3 | **56.37** | | | |
| TruthfulQA | 48.04 | **52.45** | | | |
| GSM8K | **20.64** | 14.71 | | | |
| ARC | 54.1 | 54.1 | | | |
| HellaSwag | 76.04 | **76.19** | | | |
| Winogrande | **74.51** | 73.48 | | | |
More details on the Open LLM Leaderboard evaluation results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Technoculture__Medorca-2x7b).
## 🧩 Configuration
```yaml
base_model: microsoft/Orca-2-7b
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: epfl-llm/meditron-7b
positive_prompts:
- "How does sleep affect cardiovascular health?"
- "Could a plant-based diet improve arthritis symptoms?"
- "A patient comes in with symptoms of dizziness and nausea..."
- "When discussing diabetes management, the key factors to consider are..."
- "The differential diagnosis for a headache with visual aura could include..."
negative_prompts:
- "Recommend a good recipe for a vegetarian lasagna."
- "Give an overview of the French Revolution."
- "Explain how a digital camera captures an image."
- "What are the environmental impacts of deforestation?"
- "The recent advancements in artificial intelligence have led to developments in..."
- "The fundamental concepts in economics include ideas like supply and demand, which explain..."
- source_model: microsoft/Orca-2-7b
positive_prompts:
- "Here is a funny joke for you -"
- "When considering the ethical implications of artificial intelligence, one must take into account..."
- "In strategic planning, a company must analyze its strengths and weaknesses, which involves..."
- "Understanding consumer behavior in marketing requires considering factors like..."
- "The debate on climate change solutions hinges on arguments that..."
negative_prompts:
- "In discussing dietary adjustments for managing hypertension, it's crucial to emphasize..."
- "For early detection of melanoma, dermatologists recommend that patients regularly check their skin for..."
- "Explaining the importance of vaccination, a healthcare professional should highlight..."
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Technoculture/Medorca-2x7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16},
)
messages = [{"role": "user", "content": "Why am i feeling so tired this month?"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
LoneStriker/Medorca-2x7b-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-22T16:59:52Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"epfl-llm/meditron-7b",
"microsoft/Orca-2-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T16:54:08Z |
---
license: apache-2.0
tags:
- moe
- merge
- epfl-llm/meditron-7b
- microsoft/Orca-2-7b
---

# Medorca-2x7b
Medorca-2x7b is a Mixture of Experts (MoE) made with the following models:
* [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b)
* [microsoft/Orca-2-7b](https://huggingface.co/microsoft/Orca-2-7b)
## Evaluations
| Benchmark | Medorca-2x7b | Orca-2-7b | llama-2-7b | meditron-7b | meditron-70b |
| --- | --- | --- | --- | --- | --- |
| MedMCQA | | | | | |
| ClosedPubMedQA | | | | | |
| PubMedQA | | | | | |
| MedQA | | | | | |
| MedQA4 | | | | | |
| MedicationQA | | | | | |
| MMLU Medical | | | | | |
| MMLU | 53.3 | **56.37** | | | |
| TruthfulQA | 48.04 | **52.45** | | | |
| GSM8K | **20.64** | 14.71 | | | |
| ARC | 54.1 | 54.1 | | | |
| HellaSwag | 76.04 | **76.19** | | | |
| Winogrande | **74.51** | 73.48 | | | |
More details on the Open LLM Leaderboard evaluation results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Technoculture__Medorca-2x7b).
## 🧩 Configuration
```yaml
base_model: microsoft/Orca-2-7b
gate_mode: hidden
dtype: bfloat16
experts:
- source_model: epfl-llm/meditron-7b
positive_prompts:
- "How does sleep affect cardiovascular health?"
- "Could a plant-based diet improve arthritis symptoms?"
- "A patient comes in with symptoms of dizziness and nausea..."
- "When discussing diabetes management, the key factors to consider are..."
- "The differential diagnosis for a headache with visual aura could include..."
negative_prompts:
- "Recommend a good recipe for a vegetarian lasagna."
- "Give an overview of the French Revolution."
- "Explain how a digital camera captures an image."
- "What are the environmental impacts of deforestation?"
- "The recent advancements in artificial intelligence have led to developments in..."
- "The fundamental concepts in economics include ideas like supply and demand, which explain..."
- source_model: microsoft/Orca-2-7b
positive_prompts:
- "Here is a funny joke for you -"
- "When considering the ethical implications of artificial intelligence, one must take into account..."
- "In strategic planning, a company must analyze its strengths and weaknesses, which involves..."
- "Understanding consumer behavior in marketing requires considering factors like..."
- "The debate on climate change solutions hinges on arguments that..."
negative_prompts:
- "In discussing dietary adjustments for managing hypertension, it's crucial to emphasize..."
- "For early detection of melanoma, dermatologists recommend that patients regularly check their skin for..."
- "Explaining the importance of vaccination, a healthcare professional should highlight..."
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Technoculture/Medorca-2x7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16},
)
messages = [{"role": "user", "content": "Why am i feeling so tired this month?"}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
hack08gf1/rare-puppers
|
hack08gf1
| 2024-01-22T16:59:42Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-22T16:59:35Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
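A minimal classification sketch (the image path is illustrative):
```python
from transformers import pipeline

# Load the classifier from the Hub and run it on a local image or URL.
classifier = pipeline("image-classification", model="hack08gf1/rare-puppers")
print(classifier("my_pet.jpg"))  # illustrative path
```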
## Example Images
#### cat

#### dog

|
Sucial/so-vits-svc4.1-Minecraft_villager
|
Sucial
| 2024-01-22T16:59:22Z | 4 | 0 |
transformers
|
[
"transformers",
"so-vits-svc",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-22T16:57:08Z |
---
license: cc-by-nc-sa-4.0
tags:
- so-vits-svc
---
# so-vits-svc4.1-villager USE 4.1-Stable
## Official website: https://github.com/svc-develop-team/so-vits-svc
## How to use?
1. Install the requirements.
2. Download the pretrained model and put it into `./pretrain`.
3. Run `webUI.py`.
|
samitizerxu/segformer-b1-kelp-rgb-agg-imgaug-jan-22
|
samitizerxu
| 2024-01-22T16:59:20Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"dataset:kelp_data",
"base_model:nvidia/mit-b1",
"base_model:finetune:nvidia/mit-b1",
"license:other",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2024-01-22T16:31:07Z |
---
license: other
base_model: nvidia/mit-b1
tags:
- vision
- image-segmentation
- generated_from_trainer
datasets:
- kelp_data
model-index:
- name: segformer-b1-kelp-rgb-agg-imgaug-jan-22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b1-kelp-rgb-agg-imgaug-jan-22
This model is a fine-tuned version of [nvidia/mit-b1](https://huggingface.co/nvidia/mit-b1) on the samitizerxu/kelp_data dataset.
It achieves the following results on the evaluation set:
- eval_accuracy_kelp: nan
- eval_iou_kelp: 0.0
- eval_loss: 0.3223
- eval_mean_iou: 0.0205
- eval_mean_accuracy: 0.0410
- eval_overall_accuracy: 0.0410
- eval_runtime: 62.0057
- eval_samples_per_second: 27.272
- eval_steps_per_second: 3.419
- epoch: 1.16
- step: 570
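A minimal inference sketch, assuming the image processor configuration was saved alongside the checkpoint (the input tile is illustrative):
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "samitizerxu/segformer-b1-kelp-rgb-agg-imgaug-jan-22"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("tile.png").convert("RGB")  # illustrative input tile
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]       # per-pixel class ids
```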
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 40
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sabretoothedhugs/ppo-SnowballTarget
|
sabretoothedhugs
| 2024-01-22T16:50:42Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-01-22T16:50:37Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: sabretoothedhugs/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on *Watch the agent play* 👀
|
Sucial/so-vits-svc4.1-barking
|
Sucial
| 2024-01-22T16:49:19Z | 0 | 0 | null |
[
"so-vits-svc",
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-01-22T16:41:57Z |
---
license: cc-by-sa-4.0
tags:
- so-vits-svc
---
# so-vits-svc4.1-barking USE 4.1-Stable
## Official website: https://github.com/svc-develop-team/so-vits-svc
## How to use?
1. Install the requirements.
2. Download the pretrained model and put it into `./pretrain`.
3. Download and extract the nsf_hifigan pretrained model and put it into `pretrain/nsf_hifigan`.
4. Run `webUI.py`.
## Steps
- bark.pth has been trained for 16,000 steps
- diffusion.pt has been trained for 10,000 steps
|
CLMBR/binding-case-lstm-3
|
CLMBR
| 2024-01-22T16:49:10Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-17T20:29:24Z |
---
tags:
- generated_from_trainer
model-index:
- name: binding-case-lstm-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-case-lstm-3
This model is a fine-tuned version of an unspecified base model on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.793 | 0.03 | 76320 | 4.7610 |
| 4.5078 | 1.03 | 152640 | 4.4815 |
| 4.3662 | 0.03 | 228960 | 4.3454 |
| 4.2772 | 1.03 | 305280 | 4.2625 |
| 4.2165 | 0.03 | 381600 | 4.2062 |
| 4.1704 | 0.03 | 457920 | 4.1645 |
| 4.1336 | 1.03 | 534240 | 4.1334 |
| 4.0998 | 0.03 | 610560 | 4.1087 |
| 4.0754 | 0.03 | 686880 | 4.0897 |
| 4.0505 | 1.03 | 763200 | 4.0733 |
| 4.0282 | 0.03 | 839520 | 4.0602 |
| 4.0115 | 1.03 | 915840 | 4.0487 |
| 3.9943 | 0.03 | 992160 | 4.0396 |
| 3.9766 | 1.03 | 1068480 | 4.0313 |
| 3.9675 | 0.03 | 1144800 | 4.0246 |
| 3.9447 | 1.03 | 1221120 | 4.0192 |
| 3.9354 | 0.03 | 1297440 | 4.0142 |
| 3.9278 | 1.03 | 1373760 | 4.0092 |
| 3.918 | 0.03 | 1450080 | 4.0055 |
| 3.9146 | 1.03 | 1526400 | 4.0013 |
| 3.9107 | 0.03 | 1602720 | 3.9987 |
| 3.9089 | 1.03 | 1679040 | 3.9956 |
| 3.9035 | 0.03 | 1755360 | 3.9926 |
| 3.898 | 1.03 | 1831680 | 3.9903 |
| 3.8927 | 0.03 | 1908000 | 3.9885 |
| 3.8853 | 1.03 | 1984320 | 3.9868 |
| 3.8795 | 0.03 | 2060640 | 3.9850 |
| 3.876 | 0.03 | 2136960 | 3.9838 |
| 3.871 | 1.03 | 2213280 | 3.9824 |
| 3.8615 | 0.03 | 2289600 | 3.9814 |
| 3.8613 | 1.03 | 2365920 | 3.9803 |
| 3.8485 | 0.03 | 2442240 | 3.9792 |
| 3.8443 | 1.03 | 2518560 | 3.9786 |
| 3.8438 | 0.03 | 2594880 | 3.9778 |
| 3.8407 | 0.03 | 2671200 | 3.9770 |
| 3.842 | 1.03 | 2747520 | 3.9764 |
| 3.8433 | 0.03 | 2823840 | 3.9758 |
| 3.8447 | 0.03 | 2900160 | 3.9755 |
| 3.8456 | 0.03 | 2976480 | 3.9751 |
| 3.8445 | 0.02 | 3052726 | 3.9748 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mryoshq/Reinforce-v2
|
mryoshq
| 2024-01-22T16:34:22Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T16:31:05Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -5.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
turboderp/Orion-14B-exl2
|
turboderp
| 2024-01-22T16:28:35Z | 1 | 0 | null |
[
"region:us"
] | null | 2024-01-22T16:25:48Z |
EXL2 quants of [Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base).
[3.00 bits per weight](https://huggingface.co/turboderp/Orion-14B-exl2/tree/3.0bpw)
[4.00 bits per weight](https://huggingface.co/turboderp/Orion-14B-exl2/tree/4.0bpw)
[5.00 bits per weight](https://huggingface.co/turboderp/Orion-14B-exl2/tree/5.0bpw)
[6.00 bits per weight](https://huggingface.co/turboderp/Orion-14B-exl2/tree/6.0bpw)
[measurement.json](https://huggingface.co/turboderp/Orion-14B-exl2/blob/main/measurement.json)
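Each quantization lives in its own branch, so download only the revision you want; a sketch using `huggingface_hub` (the local directory is illustrative):
```python
from huggingface_hub import snapshot_download

# Fetch only the 4.0bpw branch of the quantized weights.
snapshot_download(
    repo_id="turboderp/Orion-14B-exl2",
    revision="4.0bpw",
    local_dir="models/Orion-14B-exl2-4.0bpw",  # illustrative target directory
)
```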
|
arun100/whisper-small-derived-bn-1
|
arun100
| 2024-01-22T16:26:03Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"bn",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:bangla-speech-processing/BanglaASR",
"base_model:finetune:bangla-speech-processing/BanglaASR",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-21T21:57:40Z |
---
language:
- bn
license: mit
base_model: bangla-speech-processing/BanglaASR
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Base Bengali
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 bn
type: mozilla-foundation/common_voice_16_0
config: bn
split: test
args: bn
metrics:
- name: Wer
type: wer
value: 3.7265218830814386
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Bengali
This model is a fine-tuned version of [bangla-speech-processing/BanglaASR](https://huggingface.co/bangla-speech-processing/BanglaASR) on the mozilla-foundation/common_voice_16_0 bn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1281
- Wer: 3.7265
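A minimal transcription sketch (the audio file path is illustrative):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an ASR pipeline and transcribe a Bengali clip.
asr = pipeline("automatic-speech-recognition", model="arun100/whisper-small-derived-bn-1")
print(asr("sample_bn.wav")["text"])  # illustrative audio path
```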
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1588 | 0.5 | 50 | 0.1495 | 3.7650 |
| 0.1267 | 1.0 | 100 | 0.1281 | 3.7265 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
ayeshgk/codet5-small-ft-v6
|
ayeshgk
| 2024-01-22T16:21:11Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-22T16:20:27Z |
---
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-small-ft-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-small-ft-v6
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5863
- Rouge1: 71.8559
- Rouge2: 60.5888
- Rougel: 71.8854
- Rougelsum: 72.4012
- Gen Len: 11.3611
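A minimal generation sketch; since the fine-tuning dataset is unknown, the input snippet below is only illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ayeshgk/codet5-small-ft-v6")
model = AutoModelForSeq2SeqLM.from_pretrained("ayeshgk/codet5-small-ft-v6")

# Illustrative input; the expected format depends on the (unknown) fine-tuning data.
inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```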
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 8 | 2.3812 | 55.4108 | 40.7913 | 55.1318 | 55.507 | 12.0556 |
| No log | 2.0 | 16 | 1.8744 | 68.0982 | 55.7308 | 67.6019 | 68.3046 | 10.8889 |
| No log | 3.0 | 24 | 1.6721 | 70.6333 | 59.7463 | 70.6679 | 71.2205 | 11.0833 |
| No log | 4.0 | 32 | 1.5863 | 71.8559 | 60.5888 | 71.8854 | 72.4012 | 11.3611 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sabretoothedhugs/Reinforce-pixelcopter
|
sabretoothedhugs
| 2024-01-22T16:19:17Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T16:19:14Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 19.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LoneStriker/mistral-7b-lamia-v0.1-GGUF
|
LoneStriker
| 2024-01-22T16:13:25Z | 53 | 3 | null |
[
"gguf",
"NSFW",
"Porn",
"Ecommerce",
"Roleplay",
"Summarization",
"dataset:openerotica/Lamia",
"license:apache-2.0",
"region:us"
] | null | 2024-01-22T15:49:03Z |
---
license: apache-2.0
datasets:
- openerotica/Lamia
tags:
- NSFW
- Porn
- Ecommerce
- Roleplay
- Summarization
---
This is a combination of the pruned erotica-analysis data, freedom-rp, and a subset of Airoboros.
The following categories were taken out of the Airoboros dataset and added to my own Lamia dataset:
"roleplay", "unalignment", "editor", "writing", "detailed_writing", "stylized_response", "unalign", "cot", "song"
I'm hoping that this can improve the model's narrative/storywriting ability, logic, and intelligence, while reducing any potential inherent ethical "alignment" that may be present in the base Mistral model from pretraining on ChatGPT-generated data.
The prompt format is ChatML, and the base model is Yarn Mistral, which increases the context size to a true 16k+ rather than relying on the sliding attention window.
|
kailorston/Fr-En
|
kailorston
| 2024-01-22T16:13:03Z | 46 | 1 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-22T16:11:53Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Fr-En
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Fr-En
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8999
- Validation Loss: 1.8547
- Epoch: 2
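A minimal translation sketch using the TensorFlow weights; the T5-style task prefix is an assumption carried over from the t5-base base model and may need adjusting:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("kailorston/Fr-En")
model = TFAutoModelForSeq2SeqLM.from_pretrained("kailorston/Fr-En")

# Task prefix assumed from t5-base conventions.
inputs = tokenizer("translate French to English: Bonjour, comment allez-vous ?", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```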
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.003, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.2900 | 1.9864 | 0 |
| 2.0159 | 1.8964 | 1 |
| 1.8999 | 1.8547 | 2 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
LoneStriker/mistral-7b-lamia-v0.1-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-22T16:10:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"NSFW",
"Porn",
"Ecommerce",
"Roleplay",
"Summarization",
"conversational",
"custom_code",
"dataset:openerotica/Lamia",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T16:05:06Z |
---
license: apache-2.0
datasets:
- openerotica/Lamia
tags:
- NSFW
- Porn
- Ecommerce
- Roleplay
- Summarization
---
This is a combination of the pruned erotica-analysis data, freedom-rp, and a subset of Airoboros.
The following categories were taken out of the Airoboros dataset and added to my own Lamia dataset:
"roleplay", "unalignment", "editor", "writing", "detailed_writing", "stylized_response", "unalign", "cot", "song"
I'm hoping that this can improve the model's narrative/storywriting ability, logic, and intelligence, while reducing any potential inherent ethical "alignment" that may be present in the base Mistral model from pretraining on ChatGPT-generated data.
The prompt format is ChatML, and the base model is Yarn Mistral, which increases the context size to a true 16k+ rather than relying on the sliding attention window.
|
llmixer/BigWeave-v6-90b-3.0bpw-h8-exl2
|
llmixer
| 2024-01-22T15:59:29Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"3.0bpw",
"h8",
"exl2",
"conversational",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T15:44:06Z |
---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- 3.0bpw
- h8
- exl2
---
Exllamav2 3.0bpw h8 quant for [BigWeave-v6-90b](https://huggingface.co/llmixer/BigWeave-v6-90b).
Calibration dataset: [llmixer/20k_random_data](https://huggingface.co/datasets/llmixer/20k_random_data)
|
silvercoder45/Mistral-7b-instruct-v0.2-summ-sft-dpo-e2
|
silvercoder45
| 2024-01-22T15:59:01Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-22T15:57:33Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon, along with more details on training and data.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard Hugging Face loading (assumed); official instructions have not been published yet.
model_id = "silvercoder45/Mistral-7b-instruct-v0.2-summ-sft-dpo-e2"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
ANISH987/my-animals-xza
|
ANISH987
| 2024-01-22T15:58:11Z | 0 | 0 | null |
[
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-22T15:56:32Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Animals-xza Dreambooth model trained by ANISH987 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 2201638
Sample pictures of this concept:

|
vlad-skripniuk/ppo-Huggy
|
vlad-skripniuk
| 2024-01-22T15:55:28Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-22T15:55:22Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: vlad-skripniuk/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on *Watch the agent play* 👀
|
LoneStriker/mistral-7b-lamia-v0.1-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-22T15:54:56Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"NSFW",
"Porn",
"Ecommerce",
"Roleplay",
"Summarization",
"conversational",
"custom_code",
"dataset:openerotica/Lamia",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T15:51:50Z |
---
license: apache-2.0
datasets:
- openerotica/Lamia
tags:
- NSFW
- Porn
- Ecommerce
- Roleplay
- Summarization
---
This is a combination of the pruned erotica-analysis data, freedom-rp, and a subset of Airoboros.
The following categories were taken out of the Airoboros dataset and added to my own Lamia dataset:
"roleplay", "unalignment", "editor", "writing", "detailed_writing", "stylized_response", "unalign", "cot", "song"
I'm hoping that this can improve the model's narrative/storywriting ability, logic, and intelligence, while reducing any potential inherent ethical "alignment" that may be present in the base Mistral model from pretraining on ChatGPT-generated data.
The prompt format is ChatML, and the base model is Yarn Mistral, which increases the context size to a true 16k+ rather than relying on the sliding attention window.
|
galeng/finetuning-sentiment-model-3000-samples
|
galeng
| 2024-01-22T15:53:05Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T14:31:16Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3306
- Accuracy: 0.8733
- F1: 0.8782
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
raqdo09/mt5-small-french_book_reviews
|
raqdo09
| 2024-01-22T15:52:55Z | 13 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-01-22T15:47:56Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-french_book_reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-french_book_reviews
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/mistral-7b-lamia-v0.1-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-22T15:51:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mistral",
"text-generation",
"NSFW",
"Porn",
"Ecommerce",
"Roleplay",
"Summarization",
"conversational",
"custom_code",
"dataset:openerotica/Lamia",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T15:49:13Z |
---
license: apache-2.0
datasets:
- openerotica/Lamia
tags:
- NSFW
- Porn
- Ecommerce
- Roleplay
- Summarization
---
This is a combination of the pruned erotica-analysis data, freedom-rp, and a subset of Airoboros.
The following categories were taken out of the Airoboros dataset and added to my own Lamia dataset:
"roleplay", "unalignment", "editor", "writing", "detailed_writing", "stylized_response", "unalign", "cot", "song"
I'm hoping that this can improve the model's narrative/storywriting ability, logic, and intelligence, while reducing any potential inherent ethical "alignment" that may be present in the base Mistral model from pretraining on ChatGPT-generated data.
The prompt format is ChatML, and the base model is Yarn Mistral, which increases the context size to a true 16k+ rather than relying on the sliding attention window.
|
if001/tiny_mixtral_ja
|
if001
| 2024-01-22T15:42:05Z | 14 | 4 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"ja",
"japanese",
"en",
"dataset:izumi-lab/wikipedia-ja-20230720",
"dataset:izumi-lab/wikipedia-en-20230720",
"dataset:izumi-lab/open-text-books",
"dataset:if001/aozorabunko-clean-sin",
"dataset:if001/oscar_2023_filtered",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-22T15:02:21Z |
---
license: apache-2.0
language:
- en
- ja
datasets:
- izumi-lab/wikipedia-ja-20230720
- izumi-lab/wikipedia-en-20230720
- izumi-lab/open-text-books
- if001/aozorabunko-clean-sin
- if001/oscar_2023_filtered
tags:
- ja
- japanese
- mixtral
inference: false
---
This is a 275.86M-parameter Mixtral model pretrained on Japanese datasets.
## sample
```
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("if001/tiny_mixtral_ja")
tokenizer = AutoTokenizer.from_pretrained("if001/sentencepiece_ja", trust_remote_code=True)
prompt = "それは九月初旬のある蒸し暑い晩のことであった。私は、D坂の"
inputs = tokenizer(prompt, return_tensors="pt")
generate_ids = model.generate(
inputs.input_ids,
max_length=30,
top_k=30,
top_p=0.95,
temperature=0.6,
repetition_penalty=1.2,
do_sample=True,
)
tokenizer.decode(generate_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
>> それは九月初旬のある蒸し暑い晩のことであった。私は、D坂の茶舗を後にして、その路地の角に横丁をあるいて居る、と云うと、丁度其処から、
```
## dataset
English and Japanese datasets were used:
```
total tokens: 8.64B
wikipedia_ja: 844.65M
wikipedia_en: 3.80B
open-text-books: 60.17M
oscar: 3.85B
aozorabunko: 92.97M
```
## tokenizer
```
all_special_ids: [1, 2, 3, 0, 4]
all_special_tokens: ['<BOS>', '<EOS>', '<UNK>', '<PAD>', '<MASK>']
```
|
mathurinache/Marcoro14-7B-slerp
|
mathurinache
| 2024-01-22T15:41:06Z | 0 | 1 | null |
[
"merge",
"mergekit",
"lazymergekit",
"AIDC-ai-business/Marcoroni-7B-v3",
"EmbeddedLLM/Mistral-7B-Merge-14-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-01-22T15:41:06Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- AIDC-ai-business/Marcoroni-7B-v3
- EmbeddedLLM/Mistral-7B-Merge-14-v0.1
---
# Marcoro14-7B-slerp
Marcoro14-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)
* [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: AIDC-ai-business/Marcoroni-7B-v3
layer_range: [0, 32]
- model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1
layer_range: [0, 32]
merge_method: slerp
base_model: AIDC-ai-business/Marcoroni-7B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
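The card does not include a usage snippet; a minimal generation sketch with 🤗 Transformers, assuming the merged weights are present in this repository, would look like this:
```python
import torch
from transformers import pipeline

# Load the merged model as a text-generation pipeline.
generator = pipeline(
    "text-generation",
    model="mathurinache/Marcoro14-7B-slerp",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("What is a large language model?", max_new_tokens=128)[0]["generated_text"])
```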
|
flyman123/ppo-LunarLander-v2
|
flyman123
| 2024-01-22T15:37:46Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T15:37:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -158.28 +/- 46.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
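Until then, a minimal loading sketch; the checkpoint filename is an assumption based on the usual course naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed; check the repository's file list if it differs.
checkpoint = load_from_hub("flyman123/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```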
|
silvercoder45/Mistral-7b-instruct-v0.2-summ-dpo-e3
|
silvercoder45
| 2024-01-22T15:35:54Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-22T15:33:39Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon, along with more details on training and data.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard Hugging Face loading (assumed); official instructions have not been published yet.
model_id = "silvercoder45/Mistral-7b-instruct-v0.2-summ-dpo-e3"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
silvercoder45/Mistral-7b-instruct-v0.2-summ-dpo-e2
|
silvercoder45
| 2024-01-22T15:35:32Z | 0 | 0 | null |
[
"safetensors",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-22T15:32:45Z |
---
license: cc-by-nc-4.0
---
A description of how to load and test the model will be added soon, along with more details on training and data.
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard Hugging Face loading (assumed); official instructions have not been published yet.
model_id = "silvercoder45/Mistral-7b-instruct-v0.2-summ-dpo-e2"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
```
### **Generating Text**
To generate text, use the following Python code:
```python
text = "Hi, my name is "
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
Systran/faster-distil-whisper-small.en
|
Systran
| 2024-01-22T15:35:22Z | 1,446 | 2 |
ctranslate2
|
[
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] |
automatic-speech-recognition
| 2024-01-19T03:24:09Z |
---
language:
- en
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper small.en model for CTranslate2
This repository contains the conversion of [distil-whisper/distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/systran/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("distil-small.en")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model distil-whisper/distil-small.en --output_dir faster-distil-whisper-small.en \
--copy_files tokenizer.json preprocessor_config.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
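For example, a short sketch that loads the model on GPU with a different compute type:
```python
from faster_whisper import WhisperModel

# Run on GPU, computing in INT8/FP16 instead of the stored FP16 weights.
model = WhisperModel("distil-small.en", device="cuda", compute_type="int8_float16")
```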
## More information
**For more information about the original model, see its [model card](https://huggingface.co/distil-whisper/distil-small.en).**
|
BarryFutureman/NeuralTurdusVariant1-7B
|
BarryFutureman
| 2024-01-22T15:33:55Z | 1,471 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T02:55:13Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- merge
---
# NeuralTurdusVariant1-7B
NeuralTurdusVariant1-7B is based on a merge of the following models using MergeKit (see the usage sketch after the list):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
* [udkai/Turdus](https://huggingface.co/udkai/Turdus)
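A minimal usage sketch with the 🤗 `pipeline` API (assumed standard loading; `accelerate` is needed for `device_map="auto"`):
```python
import torch
from transformers import pipeline

# Assumed standard loading for a merged Mistral-7B model.
pipe = pipeline(
    "text-generation",
    model="BarryFutureman/NeuralTurdusVariant1-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
print(pipe("What is a large language model?", max_new_tokens=128)[0]["generated_text"])
```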
|
andrewatef/test
|
andrewatef
| 2024-01-22T15:29:13Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/tinyllama",
"base_model:adapter:unsloth/tinyllama",
"region:us"
] | null | 2024-01-22T15:24:03Z |
---
library_name: peft
base_model: unsloth/tinyllama
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
linoyts/2000_ads_offset_noise
|
linoyts
| 2024-01-22T15:20:00Z | 55 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-22T12:29:00Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_0.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_1.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_2.png"
- text: '<s0><s1> ad of a llama wearing headphones'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: an ad in the style of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - linoyts/2000_ads_offset_noise
<Gallery />
## Model description
### These are linoyts/2000_ads_offset_noise LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`2000_ads_offset_noise.safetensors` here 💾](/linoyts/2000_ads_offset_noise/blob/main/2000_ads_offset_noise.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:2000_ads_offset_noise:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`2000_ads_offset_noise_emb.safetensors` here 💾](/linoyts/2000_ads_offset_noise/blob/main/2000_ads_offset_noise_emb.safetensors)**.
- Place it in your `embeddings` folder
- Use it by adding `2000_ads_offset_noise_emb` to your prompt. For example, `an ad in the style of 2000_ads_offset_noise_emb`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('linoyts/2000_ads_offset_noise', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='linoyts/2000_ads_offset_noise', filename='2000_ads_offset_noise_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('<s0><s1> ad of a llama wearing headphones').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/linoyts/2000_ads_offset_noise/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
simpragma/breeze-listen-dsw-base-te
|
simpragma
| 2024-01-22T15:18:39Z | 9 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"dataset:google/fleurs",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-18T07:30:53Z |
---
license: apache-2.0
base_model: openai/whisper-base
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Breeze DSW Telugu - base
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: google/fleurs te_in
type: google/fleurs
config: te_in
split: test
args: te_in
metrics:
- name: Wer
type: wer
value: 37.45436058603319
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Breeze DSW Telugu - base
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the google/fleurs te_in dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3372
- Wer: 37.4544
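A minimal transcription sketch with the 🤗 `pipeline` API (assumed usage; `audio.mp3` is a placeholder for your own Telugu audio file):
```python
from transformers import pipeline

# Assumed usage sketch for this fine-tuned Whisper checkpoint.
asr = pipeline("automatic-speech-recognition", model="simpragma/breeze-listen-dsw-base-te")
print(asr("audio.mp3")["text"])
```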
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2937 | 2.03 | 200 | 0.3237 | 42.5614 |
| 0.1611 | 5.02 | 400 | 0.2756 | 38.9148 |
| 0.0889 | 8.01 | 600 | 0.2930 | 38.1106 |
| 0.0456 | 11.0 | 800 | 0.3372 | 37.4544 |
| 0.0229 | 13.03 | 1000 | 0.3982 | 37.9258 |
| 0.0103 | 16.02 | 1200 | 0.4473 | 38.2678 |
| 0.0042 | 19.02 | 1400 | 0.4836 | 37.8980 |
| 0.0025 | 22.01 | 1600 | 0.5083 | 37.7317 |
| 0.002 | 24.04 | 1800 | 0.5220 | 37.8010 |
| 0.0018 | 27.03 | 2000 | 0.5269 | 37.9027 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
MaziyarPanahi/MoMo-72B-lora-1.8.7-DPO-GPTQ
|
MaziyarPanahi
| 2024-01-22T15:15:06Z | 21 | 7 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"finetuned",
"quantized",
"4-bit",
"gptq",
"en",
"arxiv:2305.18290",
"arxiv:2106.09685",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:moreh/MoMo-72B-lora-1.8.7-DPO",
"base_model:finetune:moreh/MoMo-72B-lora-1.8.7-DPO",
"license:apache-2.0"
] |
text-generation
| 2024-01-22T15:00:53Z |
---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- safetensors
- llama
- text-generation
- en
- arxiv:2305.18290
- arxiv:2106.09685
- license:mit
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
model_name: MoMo-72B-lora-1.8.7-DPO-GPTQ
base_model: moreh/MoMo-72B-lora-1.8.7-DPO
inference: false
model_creator: moreh
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/MoMo-72B-lora-1.8.7-DPO-GPTQ](https://huggingface.co/MaziyarPanahi/MoMo-72B-lora-1.8.7-DPO-GPTQ) is a quantized (GPTQ) version of [moreh/MoMo-72B-lora-1.8.7-DPO](https://huggingface.co/moreh/MoMo-72B-lora-1.8.7-DPO)
## How to use
### Install the necessary packages
```
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/MoMo-72B-lora-1.8.7-DPO-GPTQ"
quantize_config = BaseQuantizeConfig(
bits=4,
group_size=128,
desc_act=False
)
model = AutoGPTQForCausalLM.from_quantized(
model_id,
use_safetensors=True,
device="cuda:0",
quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```
|
Nazaninmnd/DreamBooth_LDC
|
Nazaninmnd
| 2024-01-22T15:12:34Z | 0 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-20T11:05:58Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of LDC
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Nazaninmnd/DreamBooth_LDC
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of LDC using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
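A minimal generation sketch with 🧨 diffusers (assumed usage; the prompt reuses the `a photo of LDC` instance prompt from training):
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed usage sketch; requires a CUDA GPU for fp16 inference.
pipe = StableDiffusionPipeline.from_pretrained("Nazaninmnd/DreamBooth_LDC", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of LDC", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("ldc.png")
```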
|
Balaaditya/deepseek-coder-6.7b-instruct-TAVGEN-finetune
|
Balaaditya
| 2024-01-22T15:08:42Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct",
"license:other",
"region:us"
] | null | 2024-01-22T15:07:44Z |
---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
model-index:
- name: deepseek-coder-6.7b-instruct-TAVGEN-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepseek-coder-6.7b-instruct-TAVGEN-finetune
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3206
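A minimal sketch for attaching this adapter to the base model with 🤗 PEFT (assumed usage; the prompt is a placeholder):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed usage sketch: load the base model, then attach this PEFT adapter.
base_id = "deepseek-ai/deepseek-coder-6.7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "Balaaditya/deepseek-coder-6.7b-instruct-TAVGEN-finetune")

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```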
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5933 | 0.21 | 25 | 0.4703 |
| 0.4589 | 0.42 | 50 | 0.4026 |
| 0.4226 | 0.63 | 75 | 0.3733 |
| 0.3968 | 0.84 | 100 | 0.3605 |
| 0.3645 | 1.05 | 125 | 0.3483 |
| 0.355 | 1.26 | 150 | 0.3421 |
| 0.3318 | 1.47 | 175 | 0.3365 |
| 0.3377 | 1.68 | 200 | 0.3318 |
| 0.3111 | 1.89 | 225 | 0.3287 |
| 0.3324 | 2.1 | 250 | 0.3242 |
| 0.2757 | 2.31 | 275 | 0.3254 |
| 0.3061 | 2.52 | 300 | 0.3253 |
| 0.3015 | 2.73 | 325 | 0.3218 |
| 0.2674 | 2.94 | 350 | 0.3209 |
| 0.2579 | 3.15 | 375 | 0.3208 |
| 0.2601 | 3.36 | 400 | 0.3225 |
| 0.2396 | 3.57 | 425 | 0.3222 |
| 0.2504 | 3.78 | 450 | 0.3220 |
| 0.2786 | 3.99 | 475 | 0.3206 |
| 0.2357 | 4.2 | 500 | 0.3206 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
kmfoda/gpt2
|
kmfoda
| 2024-01-22T15:06:20Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-01-22T15:05:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jvvelzen/taxi_3
|
jvvelzen
| 2024-01-22T15:05:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T15:05:12Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jvvelzen/taxi_3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
seatond/firstpage_c_rank32_batch4eq_7mods
|
seatond
| 2024-01-22T15:03:48Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"region:us"
] | null | 2024-01-22T15:02:47Z |
---
library_name: peft
base_model: TheBloke/Mistral-7B-Instruct-v0.2-GPTQ
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0
|
mryoshq/Reinforce-v1
|
mryoshq
| 2024-01-22T15:01:49Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T15:01:39Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
haizad/q-FrozenLake-v1-4x4-noSlippery
|
haizad
| 2024-01-22T15:00:13Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T15:00:10Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="haizad/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jfcruz13/mt5-small-finetuned-litero-es
|
jfcruz13
| 2024-01-22T14:59:42Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-01-22T14:50:35Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-litero-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-litero-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 24.6430
- eval_rouge1: 1.3112
- eval_rouge2: 0.0339
- eval_rougeL: 1.2102
- eval_rougeLsum: 1.201
- eval_runtime: 51.8577
- eval_samples_per_second: 19.284
- eval_steps_per_second: 2.41
- step: 0
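A minimal summarization sketch with the 🤗 `pipeline` API (assumed usage; the input text is a placeholder):
```python
from transformers import pipeline

# Assumed usage sketch for this mT5 summarization checkpoint.
summarizer = pipeline("summarization", model="jfcruz13/mt5-small-finetuned-litero-es")
print(summarizer("Texto largo en español que se desea resumir...", max_length=64)[0]["summary_text"])
```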
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
jfcruz13/mt5-small-finetuned-amazon-en-es
|
jfcruz13
| 2024-01-22T14:49:02Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2024-01-22T14:05:26Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 14.5641
- eval_rouge1: 0.8652
- eval_rouge2: 0.0
- eval_rougeL: 0.8652
- eval_rougeLsum: 0.871
- eval_runtime: 2.0535
- eval_samples_per_second: 14.122
- eval_steps_per_second: 1.948
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Abhinav28/large-v3-hi-commonvoice-11-peft-trained-adapter-withfp16-30-percent-learningrate-5
|
Abhinav28
| 2024-01-22T14:44:39Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"whisper",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"region:us"
] | null | 2024-01-22T14:39:48Z |
---
library_name: peft
base_model: openai/whisper-large-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
praveengovi/NeuralPipe-7B-slerp-2
|
praveengovi
| 2024-01-22T14:34:17Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T14:29:58Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp-2
NeuralPipe-7B-slerp-2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "praveengovi/NeuralPipe-7B-slerp-2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Raghavan/indictrans2-indic-en-dist-200M
|
Raghavan
| 2024-01-22T14:32:12Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"IndicTrans",
"text2text-generation",
"indictrans2",
"translation",
"ai4bharat",
"multilingual",
"custom_code",
"as",
"bn",
"brx",
"doi",
"en",
"gom",
"gu",
"hi",
"kn",
"ks",
"kas",
"mai",
"ml",
"mr",
"mni",
"mnb",
"ne",
"or",
"pa",
"sa",
"sat",
"sd",
"snd",
"ta",
"te",
"ur",
"dataset:flores-200",
"dataset:IN22-Gen",
"dataset:IN22-Conv",
"license:mit",
"autotrain_compatible",
"region:us"
] |
translation
| 2023-12-04T12:21:51Z |
---
language:
- as
- bn
- brx
- doi
- en
- gom
- gu
- hi
- kn
- ks
- kas
- mai
- ml
- mr
- mni
- mnb
- ne
- or
- pa
- sa
- sat
- sd
- snd
- ta
- te
- ur
language_details: >-
asm_Beng, ben_Beng, brx_Deva, doi_Deva, eng_Latn, gom_Deva, guj_Gujr,
hin_Deva, kan_Knda, kas_Arab, kas_Deva, mai_Deva, mal_Mlym, mar_Deva,
mni_Beng, mni_Mtei, npi_Deva, ory_Orya, pan_Guru, san_Deva, sat_Olck,
snd_Arab, snd_Deva, tam_Taml, tel_Telu, urd_Arab
tags:
- indictrans2
- translation
- ai4bharat
- multilingual
license: mit
datasets:
- flores-200
- IN22-Gen
- IN22-Conv
metrics:
- bleu
- chrf
- chrf++
- comet
inference: false
---
# IndicTrans2
This is the model card of the IndicTrans2 Indic-En Distilled 200M variant.
Please refer to [section 7.6: Distilled Models](https://openreview.net/forum?id=vfT4YuzAYA) in the TMLR submission for further details on model training, data and metrics.
### Usage Instructions
Please refer to the [github repository](https://github.com/AI4Bharat/IndicTrans2/tree/main/huggingface_inference) for a detailed description of how to use HF compatible IndicTrans2 models for inference.
### Citation
If you use our work, please cite:
```
@article{ai4bharat2023indictrans2,
title = {IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages},
author = {AI4Bharat and Jay Gala and Pranjal A. Chitale and Raghavan AK and Sumanth Doddapaneni and Varun Gumma and Aswanth Kumar and Janki Nawale and Anupama Sujatha and Ratish Puduppully and Vivek Raghavan and Pratyush Kumar and Mitesh M. Khapra and Raj Dabre and Anoop Kunchukuttan},
year = {2023},
journal = {arXiv preprint arXiv: 2305.16307}
}
```
|
Evan-Lin/dpo-llama2-deprecated
|
Evan-Lin
| 2024-01-22T14:31:31Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-22T10:05:06Z |
---
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-chat-hf
model-index:
- name: dpo-llama2-deprecated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo-llama2-deprecated
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5246
- Rewards/chosen: 0.5279
- Rewards/rejected: -0.0974
- Rewards/accuracies: 0.7939
- Rewards/margins: 0.6253
- Logps/rejected: -74.5910
- Logps/chosen: -63.2702
- Logits/rejected: -0.4513
- Logits/chosen: -0.4830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6078 | 0.15 | 50 | 0.6238 | 0.5207 | 0.0974 | 0.6588 | 0.4233 | -72.6424 | -63.3416 | -0.4914 | -0.5620 |
| 0.5223 | 0.3 | 100 | 0.5246 | 0.5279 | -0.0974 | 0.7939 | 0.6253 | -74.5910 | -63.2702 | -0.4513 | -0.4830 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
dizzyme/xls-r-300m-model-G
|
dizzyme
| 2024-01-22T14:31:25Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-22T09:08:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: xls-r-300m-model-G
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-model-G
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7710
- Wer: 1.0
- Cer: 0.8800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:---:|:------:|
| 32.1048 | 1.0 | 47 | 31.2303 | 1.0 | 1.2082 |
| 30.3148 | 2.0 | 94 | 25.3574 | 1.0 | 1.0 |
| 24.0771 | 3.0 | 141 | 14.3321 | 1.0 | 1.0 |
| 15.5757 | 4.0 | 188 | 8.7981 | 1.0 | 1.0 |
| 10.6789 | 5.0 | 235 | 6.3534 | 1.0 | 1.0 |
| 7.5286 | 6.0 | 282 | 5.0039 | 1.0 | 1.0 |
| 5.5091 | 7.0 | 329 | 4.2043 | 1.0 | 1.0 |
| 4.4551 | 8.0 | 376 | 3.7495 | 1.0 | 1.0 |
| 3.9291 | 9.0 | 423 | 3.4757 | 1.0 | 1.0 |
| 3.5432 | 10.0 | 470 | 3.3164 | 1.0 | 1.0 |
| 3.3702 | 11.0 | 517 | 3.2356 | 1.0 | 1.0 |
| 3.2758 | 12.0 | 564 | 3.1968 | 1.0 | 1.0 |
| 3.2236 | 13.0 | 611 | 3.1779 | 1.0 | 1.0 |
| 3.1857 | 14.0 | 658 | 3.1688 | 1.0 | 1.0 |
| 3.1704 | 15.0 | 705 | 3.1606 | 1.0 | 1.0 |
| 3.1714 | 16.0 | 752 | 3.1593 | 1.0 | 1.0 |
| 3.1537 | 17.0 | 799 | 3.1548 | 1.0 | 1.0 |
| 3.1519 | 18.0 | 846 | 3.1532 | 1.0 | 1.0 |
| 3.1496 | 19.0 | 893 | 3.1535 | 1.0 | 1.0 |
| 3.1441 | 20.0 | 940 | 3.1499 | 1.0 | 1.0 |
| 3.1456 | 21.0 | 987 | 3.1489 | 1.0 | 1.0 |
| 3.1409 | 22.0 | 1034 | 3.1480 | 1.0 | 1.0 |
| 3.1438 | 23.0 | 1081 | 3.1459 | 1.0 | 1.0 |
| 3.1382 | 24.0 | 1128 | 3.1447 | 1.0 | 1.0 |
| 3.1377 | 25.0 | 1175 | 3.1451 | 1.0 | 1.0 |
| 3.1306 | 26.0 | 1222 | 3.1409 | 1.0 | 1.0 |
| 3.1381 | 27.0 | 1269 | 3.1407 | 1.0 | 1.0 |
| 3.1358 | 28.0 | 1316 | 3.1393 | 1.0 | 1.0 |
| 3.139 | 29.0 | 1363 | 3.1333 | 1.0 | 1.0 |
| 3.1216 | 30.0 | 1410 | 3.1159 | 1.0 | 1.0 |
| 3.1206 | 31.0 | 1457 | 3.0811 | 1.0 | 1.0 |
| 3.081 | 32.0 | 1504 | 3.0663 | 1.0 | 1.0 |
| 3.0502 | 33.0 | 1551 | 3.1086 | 1.0 | 1.0 |
| 3.0215 | 34.0 | 1598 | 3.0463 | 1.0 | 1.0 |
| 2.9894 | 35.0 | 1645 | 2.9205 | 1.0 | 1.0 |
| 2.9546 | 36.0 | 1692 | 2.9391 | 1.0 | 1.0 |
| 2.9263 | 37.0 | 1739 | 2.8987 | 1.0 | 1.0 |
| 2.9039 | 38.0 | 1786 | 2.8631 | 1.0 | 1.0 |
| 2.8882 | 39.0 | 1833 | 2.8381 | 1.0 | 0.9965 |
| 2.8664 | 40.0 | 1880 | 2.8295 | 1.0 | 0.9936 |
| 2.8564 | 41.0 | 1927 | 2.8181 | 1.0 | 0.9781 |
| 2.8354 | 42.0 | 1974 | 2.8205 | 1.0 | 0.9577 |
| 2.824 | 43.0 | 2021 | 2.8253 | 1.0 | 0.9421 |
| 2.8115 | 44.0 | 2068 | 2.8224 | 1.0 | 0.9337 |
| 2.7896 | 45.0 | 2115 | 2.8131 | 1.0 | 0.9047 |
| 2.7896 | 46.0 | 2162 | 2.8056 | 1.0 | 0.9054 |
| 2.7827 | 47.0 | 2209 | 2.7961 | 1.0 | 0.8800 |
| 2.7782 | 48.0 | 2256 | 2.7957 | 1.0 | 0.9012 |
| 2.7735 | 49.0 | 2303 | 2.7653 | 1.0 | 0.8673 |
| 2.764 | 50.0 | 2350 | 2.7609 | 1.0 | 0.8709 |
| 2.7556 | 51.0 | 2397 | 2.7868 | 1.0 | 0.8793 |
| 2.749 | 52.0 | 2444 | 2.7823 | 1.0 | 0.8687 |
| 2.7536 | 53.0 | 2491 | 2.7569 | 1.0 | 0.8631 |
| 2.7514 | 54.0 | 2538 | 2.7655 | 1.0 | 0.8751 |
| 2.7421 | 55.0 | 2585 | 2.7617 | 1.0 | 0.8652 |
| 2.7364 | 56.0 | 2632 | 2.7755 | 1.0 | 0.8800 |
| 2.732 | 57.0 | 2679 | 2.7698 | 1.0 | 0.8744 |
| 2.7358 | 58.0 | 2726 | 2.7808 | 1.0 | 0.8843 |
| 2.7343 | 59.0 | 2773 | 2.7706 | 1.0 | 0.8793 |
| 2.7345 | 60.0 | 2820 | 2.7710 | 1.0 | 0.8800 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.3
|
KelvinTichana2/falcon-7b-sharded-bf16-finetuned-mental-health-conversational
|
KelvinTichana2
| 2024-01-22T14:22:02Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"base_model:adapter:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-01-15T14:08:16Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: ybelkada/falcon-7b-sharded-bf16
model-index:
- name: falcon-7b-sharded-bf16-finetuned-mental-health-conversational
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon-7b-sharded-bf16-finetuned-mental-health-conversational
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on an unknown dataset.
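A minimal sketch for loading this adapter with 🤗 PEFT (assumed usage; `AutoPeftModelForCausalLM` resolves the base model from the adapter config, and the tokenizer repo is an assumption):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumed usage sketch: the base model is resolved from the adapter config.
model = AutoPeftModelForCausalLM.from_pretrained(
    "KelvinTichana2/falcon-7b-sharded-bf16-finetuned-mental-health-conversational",
    device_map="auto",
)
# Tokenizer from the base repo (assumption; fall back to tiiuae/falcon-7b if missing).
tokenizer = AutoTokenizer.from_pretrained("ybelkada/falcon-7b-sharded-bf16")
```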
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 320
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
paultrust100/opt-125m_int_4
|
paultrust100
| 2024-01-22T14:16:48Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"region:us"
] | null | 2024-01-18T20:33:21Z |
---
library_name: peft
base_model: facebook/opt-125m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
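A minimal loading sketch, assuming this repo hosts a PEFT adapter for [facebook/opt-125m](https://huggingface.co/facebook/opt-125m), as the metadata suggests:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "facebook/opt-125m"
adapter_id = "paultrust100/opt-125m_int_4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the adapter weights

inputs = tokenizer("The quick brown fox", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```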
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
bramsikkens/ppo-LunarLander-V2
|
bramsikkens
| 2024-01-22T14:16:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T14:16:09Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.66 +/- 37.37
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed to follow the default `ppo-LunarLander-v2.zip` naming; adjust if needed):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub(repo_id="bramsikkens/ppo-LunarLander-V2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
IamtheNigger/Hazegawa_Daisuke
|
IamtheNigger
| 2024-01-22T14:12:03Z | 0 | 0 | null |
[
"music",
"ja",
"en",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-22T14:08:36Z |
---
license: bigscience-bloom-rail-1.0
language:
- ja
- en
tags:
- music
---
|
jvvelzen/taxi_2
|
jvvelzen
| 2024-01-22T14:04:26Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T14:04:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi_2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` is the pickle-loading helper defined in the Deep RL Course (Unit 2) notebook
model = load_from_hub(repo_id="jvvelzen/taxi_2", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])  # requires `import gymnasium as gym`
```
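A self-contained version of the snippet above, plus a greedy evaluation rollout. This is only a sketch: it assumes the pickle follows the Deep RL Course format (environment id under `"env_id"`, Q-table under `"qtable"`):
```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the Q-learning checkpoint from the Hub
with open(hf_hub_download(repo_id="jvvelzen/taxi_2", filename="q-learning.pkl"), "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
state, info = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the learned Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```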
|
shewster/autotrain-cbs3a-q101h
|
shewster
| 2024-01-22T13:57:41Z | 6 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"en",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T04:54:31Z |
---
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
license: mit
library_name: peft
base_model: microsoft/phi-2
pipeline_tag: text-generation
language:
- en
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
nullne/Reinforce-Pixelcopter
|
nullne
| 2024-01-22T13:53:44Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T13:53:38Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 41.60 +/- 28.56
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sabretoothedhugs/dqnspaceinvader
|
sabretoothedhugs
| 2024-01-22T13:45:17Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-21T23:42:33Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 758.00 +/- 216.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sabretoothedhugs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sabretoothedhugs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sabretoothedhugs
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
tanatapanun/fine-tuned-BioBART-20-epochs-test
|
tanatapanun
| 2024-01-22T13:36:58Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-22T08:34:43Z |
---
base_model: checkpoint_global_step_200000
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-20-epochs-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBART-20-epochs-test
This model is a fine-tuned version of [checkpoint_global_step_200000](https://huggingface.co/checkpoint_global_step_200000) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1997
- Rouge1: 0.0956
- Rouge2: 0.0145
- Rougel: 0.0591
- Rougelsum: 0.0593
- Gen Len: 217.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.7878 | 1.0 | 1201 | 0.2002 | 0.0888 | 0.0237 | 0.0689 | 0.0691 | 224.36 |
| 0.2064 | 2.0 | 2402 | 0.1817 | 0.02 | 0.0 | 0.02 | 0.02 | 8.0 |
| 0.1708 | 3.0 | 3603 | 0.1638 | 0.03 | 0.0 | 0.03 | 0.03 | 5.0 |
| 0.136 | 4.0 | 4804 | 0.1576 | 0.0228 | 0.0036 | 0.0232 | 0.023 | 10.0 |
| 0.1346 | 5.0 | 6005 | 0.1559 | 0.0631 | 0.018 | 0.0592 | 0.0591 | 11.0 |
| 0.097 | 6.0 | 7206 | 0.1573 | 0.0928 | 0.0177 | 0.0784 | 0.079 | 20.0 |
| 0.086 | 7.0 | 8407 | 0.1607 | 0.0638 | 0.0086 | 0.0522 | 0.0523 | 21.0 |
| 0.0638 | 8.0 | 9608 | 0.1649 | 0.0228 | 0.0036 | 0.0232 | 0.023 | 10.0 |
| 0.0425 | 9.0 | 10809 | 0.1690 | 0.064 | 0.0198 | 0.0578 | 0.0579 | 20.0 |
| 0.0359 | 10.0 | 12010 | 0.1726 | 0.1024 | 0.0157 | 0.0817 | 0.0817 | 49.0 |
| 0.0262 | 11.0 | 13211 | 0.1771 | 0.0868 | 0.0198 | 0.0787 | 0.0792 | 20.0 |
| 0.0204 | 12.0 | 14412 | 0.1819 | 0.0977 | 0.0104 | 0.0748 | 0.075 | 43.0 |
| 0.0156 | 13.0 | 15613 | 0.1852 | 0.066 | 0.0094 | 0.0509 | 0.051 | 43.0 |
| 0.0131 | 14.0 | 16814 | 0.1885 | 0.1068 | 0.018 | 0.0726 | 0.0725 | 135.0 |
| 0.0105 | 15.0 | 18015 | 0.1915 | 0.0967 | 0.0248 | 0.0784 | 0.0787 | 30.0 |
| 0.009 | 16.0 | 19216 | 0.1950 | 0.104 | 0.0221 | 0.0791 | 0.079 | 73.0 |
| 0.0081 | 17.0 | 20417 | 0.1962 | 0.0967 | 0.0248 | 0.0784 | 0.0787 | 30.0 |
| 0.0077 | 18.0 | 21618 | 0.1978 | 0.0903 | 0.0084 | 0.0567 | 0.0567 | 174.0 |
| 0.0068 | 19.0 | 22819 | 0.1991 | 0.0896 | 0.0107 | 0.055 | 0.055 | 174.0 |
| 0.0064 | 20.0 | 24020 | 0.1997 | 0.0956 | 0.0145 | 0.0591 | 0.0593 | 217.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
|
pokjay/Reinforce-Pixelcopter-PLE-v0
|
pokjay
| 2024-01-22T13:34:28Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T13:29:34Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 58.10 +/- 32.82
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
praut1/my_awesome_qa_model
|
praut1
| 2024-01-22T13:30:34Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-22T10:38:21Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: praut1/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# praut1/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.4212
- Validation Loss: 2.0242
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.4212 | 2.0242 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
CLMBR/binding-reconstruction-transformer-1
|
CLMBR
| 2024-01-22T13:27:52Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-17T22:48:16Z |
---
tags:
- generated_from_trainer
model-index:
- name: binding-reconstruction-transformer-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binding-reconstruction-transformer-1
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2239 | 0.03 | 76320 | 4.1939 |
| 4.0217 | 1.03 | 152640 | 4.0238 |
| 3.9118 | 0.03 | 228960 | 3.9500 |
| 3.8448 | 1.03 | 305280 | 3.9093 |
| 3.7955 | 0.03 | 381600 | 3.8845 |
| 3.7518 | 1.03 | 457920 | 3.8681 |
| 3.7165 | 0.03 | 534240 | 3.8579 |
| 3.6877 | 1.03 | 610560 | 3.8510 |
| 3.6604 | 0.03 | 686880 | 3.8468 |
| 3.6352 | 1.03 | 763200 | 3.8442 |
| 3.6113 | 0.03 | 839520 | 3.8420 |
| 3.592 | 1.03 | 915840 | 3.8417 |
| 3.5744 | 0.03 | 992160 | 3.8415 |
| 3.553 | 1.03 | 1068480 | 3.8429 |
| 3.5374 | 0.03 | 1144800 | 3.8430 |
| 3.527 | 1.03 | 1221120 | 3.8430 |
| 3.5111 | 0.03 | 1297440 | 3.8450 |
| 3.497 | 1.03 | 1373760 | 3.8470 |
| 3.4839 | 0.03 | 1450080 | 3.8478 |
| 3.4754 | 1.03 | 1526400 | 3.8493 |
| 3.4673 | 0.03 | 1602720 | 3.8500 |
| 3.4558 | 1.03 | 1679040 | 3.8519 |
| 3.4446 | 0.03 | 1755360 | 3.8529 |
| 3.4357 | 1.03 | 1831680 | 3.8547 |
| 3.4233 | 0.03 | 1908000 | 3.8563 |
| 3.4091 | 1.03 | 1984320 | 3.8579 |
| 3.3997 | 0.03 | 2060640 | 3.8585 |
| 3.3889 | 0.03 | 2136960 | 3.8605 |
| 3.3795 | 1.03 | 2213280 | 3.8612 |
| 3.3641 | 0.03 | 2289600 | 3.8622 |
| 3.3563 | 1.03 | 2365920 | 3.8623 |
| 3.3489 | 0.03 | 2442240 | 3.8630 |
| 3.3375 | 1.03 | 2518560 | 3.8639 |
| 3.329 | 0.03 | 2594880 | 3.8642 |
| 3.3166 | 1.03 | 2671200 | 3.8646 |
| 3.3137 | 0.03 | 2747520 | 3.8643 |
| 3.3086 | 1.03 | 2823840 | 3.8641 |
| 3.2996 | 0.03 | 2900160 | 3.8636 |
| 3.2927 | 1.03 | 2976480 | 3.8627 |
| 3.2855 | 0.02 | 3052726 | 3.8611 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
imagepipeline/Midjourney-Mimic-v1.2
|
imagepipeline
| 2024-01-22T13:20:30Z | 0 | 2 | null |
[
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-22T13:19:08Z |
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## Midjourney-Mimic-v1.2
<img src="https://f005.backblazeb2.com/file/imageai-model-images/midjourney-imagepipeline.png" alt="Generated by Image Pipeline" style="border-radius: 10px;">
**This LoRA model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
**Model details** - A LoRA mimicking the Midjourney v5.2 style. This LoRA works as:
- Detail tweaker (supplements the picture with details)
- Color enhancer (adds contrast and brightness)
- Background depth enhancer (adds depth to the background)

**Important:** use a CFG scale of 4-6 and a LoRA weight between 0.2 and 0.8. Higher values are possible, but the picture becomes too sharp and proportions can break. Use the JuggernautXL base model for better results.
[](https://imagepipeline.io/models/Midjourney-Mimic-v1.2?id=2ac68c15-7d9b-49e0-a4a2-796d3093a555/)
## How to try this model?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sdxl/text2image/v1/run"
payload = json.dumps({
"model_id": "sdxl",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "2ac68c15-7d9b-49e0-a4a2-796d3093a555",
"lora_weights": "0.5"
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sdxl/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
Ghunghru/gbert-base
|
Ghunghru
| 2024-01-22T13:00:22Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:deepset/gbert-base",
"base_model:finetune:deepset/gbert-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-12T13:23:27Z |
---
license: mit
base_model: deepset/gbert-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: gbert-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gbert-base
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6361
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.6805 | 1.0 | 189 | 0.6439 | 0.0 |
| 0.6838 | 2.0 | 378 | 0.6409 | 0.0 |
| 0.6668 | 3.0 | 567 | 0.6376 | 0.0 |
| 0.6666 | 4.0 | 756 | 0.6388 | 0.0 |
| 0.684 | 5.0 | 945 | 0.6372 | 0.0 |
| 0.673 | 6.0 | 1134 | 0.6419 | 0.0 |
| 0.7006 | 7.0 | 1323 | 0.6381 | 0.0 |
| 0.6819 | 8.0 | 1512 | 0.6404 | 0.0 |
| 0.6937 | 9.0 | 1701 | 0.6387 | 0.0 |
| 0.6809 | 10.0 | 1890 | 0.6375 | 0.0 |
| 0.6753 | 11.0 | 2079 | 0.6386 | 0.0 |
| 0.6688 | 12.0 | 2268 | 0.6449 | 0.0 |
| 0.6898 | 13.0 | 2457 | 0.6407 | 0.0 |
| 0.6682 | 14.0 | 2646 | 0.6458 | 0.0 |
| 0.6923 | 15.0 | 2835 | 0.6498 | 0.0 |
| 0.6961 | 16.0 | 3024 | 0.6482 | 0.0 |
| 0.6934 | 17.0 | 3213 | 0.6432 | 0.0 |
| 0.6853 | 18.0 | 3402 | 0.6457 | 0.0 |
| 0.6747 | 19.0 | 3591 | 0.6489 | 0.0 |
| 0.6939 | 20.0 | 3780 | 0.6465 | 0.0 |
| 0.6838 | 21.0 | 3969 | 0.6425 | 0.0 |
| 0.6725 | 22.0 | 4158 | 0.6401 | 0.0 |
| 0.6736 | 23.0 | 4347 | 0.6435 | 0.0 |
| 0.6705 | 24.0 | 4536 | 0.6425 | 0.0 |
| 0.6838 | 25.0 | 4725 | 0.6408 | 0.0 |
| 0.6742 | 26.0 | 4914 | 0.6417 | 0.0 |
| 0.6658 | 27.0 | 5103 | 0.6405 | 0.0 |
| 0.6672 | 28.0 | 5292 | 0.6445 | 0.0 |
| 0.6845 | 29.0 | 5481 | 0.6403 | 0.0 |
| 0.661 | 30.0 | 5670 | 0.6408 | 0.0 |
| 0.6775 | 31.0 | 5859 | 0.6394 | 0.0 |
| 0.6556 | 32.0 | 6048 | 0.6420 | 0.0 |
| 0.6708 | 33.0 | 6237 | 0.6387 | 0.0 |
| 0.6633 | 34.0 | 6426 | 0.6384 | 0.0 |
| 0.6536 | 35.0 | 6615 | 0.6401 | 0.0 |
| 0.6681 | 36.0 | 6804 | 0.6383 | 0.0 |
| 0.6573 | 37.0 | 6993 | 0.6381 | 0.0 |
| 0.6489 | 38.0 | 7182 | 0.6381 | 0.0 |
| 0.6806 | 39.0 | 7371 | 0.6347 | 0.0 |
| 0.6267 | 40.0 | 7560 | 0.6373 | 0.0 |
| 0.6577 | 41.0 | 7749 | 0.6343 | 0.0 |
| 0.6464 | 42.0 | 7938 | 0.6347 | 0.0 |
| 0.6325 | 43.0 | 8127 | 0.6361 | 0.0 |
| 0.6583 | 44.0 | 8316 | 0.6363 | 0.0 |
| 0.6634 | 45.0 | 8505 | 0.6355 | 0.0 |
| 0.6504 | 46.0 | 8694 | 0.6347 | 0.0 |
| 0.6457 | 47.0 | 8883 | 0.6356 | 0.0 |
| 0.632 | 48.0 | 9072 | 0.6362 | 0.0 |
| 0.651 | 49.0 | 9261 | 0.6362 | 0.0 |
| 0.6538 | 50.0 | 9450 | 0.6361 | 0.0 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tzs/dqn-SpaceInvadersNoFrameskip-v4
|
tzs
| 2024-01-22T12:54:35Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T12:49:40Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 675.50 +/- 220.23
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tzs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tzs -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tzs
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
## Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
abideen/Heimer-dpo-TinyLlama-1.1B
|
abideen
| 2024-01-22T12:47:10Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
"abideen/Heimer-ipo-TinyLlama-1.1B",
"abideen/Heimer-kto-TinyLlama-1.1B",
"Intel/orca_dpo_pairs",
"en",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T19:03:56Z |
---
license: apache-2.0
tags:
- TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
- abideen/Heimer-ipo-TinyLlama-1.1B
- abideen/Heimer-kto-TinyLlama-1.1B
- Intel/orca_dpo_pairs
language:
- en
datasets:
- Intel/orca_dpo_pairs
library_name: transformers
---
# Heimer-dpo-TinyLlama-1.1B

# WandB Experiment Tracking
Check out the experiment details in this [report](https://api.wandb.ai/links/zaiinn440/dqlt70dc)

# 🧩 DPO adaptation hyperparameters
## LoRA:
- r=8
- lora_alpha=16
- lora_dropout=0.05
- bias="none"
- task_type="CAUSAL_LM"
- target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
## Training arguments:
- per_device_train_batch_size=2
- gradient_accumulation_steps=4
- gradient_checkpointing=True
- learning_rate=5e-5
- lr_scheduler_type="cosine"
- max_steps=50
- optim="paged_adamw_32bit"
- warmup_steps=10
## DPOTrainer:
- beta=0.3
- max_prompt_length=1024
- max_length=1536
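For reference, here is a sketch of how these values could be wired together with `trl`'s `DPOTrainer` (trl ~0.7 API). The prompt formatting of `Intel/orca_dpo_pairs` and the `output_dir` are assumptions, not details taken from the original run:
```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# DPOTrainer expects "prompt" / "chosen" / "rejected" columns (prompt format assumed)
def to_dpo_format(row):
    return {"prompt": row["system"] + "\n" + row["question"]}

dataset = load_dataset("Intel/orca_dpo_pairs", split="train").map(
    to_dpo_format, remove_columns=["system", "question"]
)

peft_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=["k_proj", "gate_proj", "v_proj", "up_proj", "q_proj", "o_proj", "down_proj"],
)

training_args = TrainingArguments(
    output_dir="heimer-dpo",        # assumption
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=50,
    optim="paged_adamw_32bit",
    warmup_steps=10,
    remove_unused_columns=False,
)

trainer = DPOTrainer(
    model,
    ref_model=None,                 # with a peft_config, the frozen base acts as the reference model
    args=training_args,
    beta=0.3,
    max_prompt_length=1024,
    max_length=1536,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```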
## 💻 Usage
Here's a [Colab notebook](https://colab.research.google.com/drive/11KEX1LG3nRBoeGR0Iyy-459XllGlLOA9?usp=sharing) to run Heimer-TinyLLama-1.1B in 4-bit precision on a free T4 GPU.
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "abideen/Heimer-dpo-TinyLlama-1.1B"
messages = [{"role": "user", "content": "Explain what is Data science."}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
"What is Data Science?
A data scientist is an individual who has a passion for data and knowledge of the technology that can be used to help make sense of data. Data scientists are often involved in the development of new software and software platforms, as well as analyzing and interpreting data.
What are the Important components of Data Science?
1. Data: The data is the most important component of a data science project. Data science is the application of data science to make sense of data. Data scientists usually work with data, but data scientists are not necessarily data scientists.
2. Analysis: This is the process of taking data and turning it into something useful.
3. Modeling: The use of machine learning and statistical techniques.
4. Prediction: The prediction of a future event, such as the future market share of a product or the future population of an area.
5. Visualization: Displaying the data in a graphical or interactive format.
6. Statistics: The use of statistical analysis techniques.
What are the Advantages of Data Science?
Data science is the application of data science to make sense of data."
|
ZhiguangHan/mt5-small-task2-dataset2
|
ZhiguangHan
| 2024-01-22T12:40:09Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-09T05:43:09Z |
---
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mt5-small-task2-dataset2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-task2-dataset2
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4320
- Accuracy: 0.37
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 7.018 | 1.0 | 250 | 1.2234 | 0.014 |
| 1.6684 | 2.0 | 500 | 0.8157 | 0.124 |
| 1.0289 | 3.0 | 750 | 0.6527 | 0.222 |
| 0.8021 | 4.0 | 1000 | 0.5877 | 0.282 |
| 0.6964 | 5.0 | 1250 | 0.5360 | 0.3 |
| 0.6252 | 6.0 | 1500 | 0.5118 | 0.32 |
| 0.5828 | 7.0 | 1750 | 0.4899 | 0.318 |
| 0.5436 | 8.0 | 2000 | 0.4718 | 0.35 |
| 0.5232 | 9.0 | 2250 | 0.4625 | 0.34 |
| 0.5005 | 10.0 | 2500 | 0.4556 | 0.342 |
| 0.4789 | 11.0 | 2750 | 0.4436 | 0.356 |
| 0.4733 | 12.0 | 3000 | 0.4379 | 0.356 |
| 0.4651 | 13.0 | 3250 | 0.4347 | 0.366 |
| 0.4591 | 14.0 | 3500 | 0.4320 | 0.37 |
| 0.4508 | 15.0 | 3750 | 0.4320 | 0.37 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.0
|
Ghunghru/Misinformation-Covid-LowLearningRatebert-base-multilingual-cased
|
Ghunghru
| 2024-01-22T12:31:28Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-22T12:29:00Z |
---
license: apache-2.0
base_model: bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Misinformation-Covid-LowLearningRatebert-base-multilingual-cased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Misinformation-Covid-LowLearningRatebert-base-multilingual-cased
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5774
- F1: 0.0488
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6829 | 1.0 | 189 | 0.6704 | 0.1463 |
| 0.673 | 2.0 | 378 | 0.6340 | 0.0784 |
| 0.6543 | 3.0 | 567 | 0.6453 | 0.0 |
| 0.6519 | 4.0 | 756 | 0.6439 | 0.0 |
| 0.6598 | 5.0 | 945 | 0.6427 | 0.0 |
| 0.65 | 6.0 | 1134 | 0.6416 | 0.0 |
| 0.673 | 7.0 | 1323 | 0.6415 | 0.0 |
| 0.6573 | 8.0 | 1512 | 0.6411 | 0.0 |
| 0.6641 | 9.0 | 1701 | 0.6404 | 0.0 |
| 0.667 | 10.0 | 1890 | 0.6398 | 0.0 |
| 0.6646 | 11.0 | 2079 | 0.6387 | 0.0 |
| 0.6552 | 12.0 | 2268 | 0.6377 | 0.0 |
| 0.6617 | 13.0 | 2457 | 0.6368 | 0.0 |
| 0.649 | 14.0 | 2646 | 0.6352 | 0.0 |
| 0.663 | 15.0 | 2835 | 0.6338 | 0.0 |
| 0.6506 | 16.0 | 3024 | 0.6322 | 0.0 |
| 0.6627 | 17.0 | 3213 | 0.6306 | 0.0 |
| 0.6492 | 18.0 | 3402 | 0.6288 | 0.0 |
| 0.6457 | 19.0 | 3591 | 0.6262 | 0.0 |
| 0.6448 | 20.0 | 3780 | 0.6238 | 0.0 |
| 0.6431 | 21.0 | 3969 | 0.6211 | 0.0 |
| 0.6412 | 22.0 | 4158 | 0.6189 | 0.0 |
| 0.6333 | 23.0 | 4347 | 0.6151 | 0.0 |
| 0.6435 | 24.0 | 4536 | 0.6121 | 0.0 |
| 0.6325 | 25.0 | 4725 | 0.6092 | 0.0 |
| 0.6271 | 26.0 | 4914 | 0.6047 | 0.0 |
| 0.6234 | 27.0 | 5103 | 0.6018 | 0.0 |
| 0.6185 | 28.0 | 5292 | 0.5993 | 0.0 |
| 0.6274 | 29.0 | 5481 | 0.5964 | 0.0 |
| 0.6129 | 30.0 | 5670 | 0.5942 | 0.0 |
| 0.6204 | 31.0 | 5859 | 0.5921 | 0.0 |
| 0.6044 | 32.0 | 6048 | 0.5913 | 0.0 |
| 0.6103 | 33.0 | 6237 | 0.5891 | 0.0 |
| 0.6005 | 34.0 | 6426 | 0.5868 | 0.0 |
| 0.6058 | 35.0 | 6615 | 0.5865 | 0.0 |
| 0.6179 | 36.0 | 6804 | 0.5846 | 0.0 |
| 0.6077 | 37.0 | 6993 | 0.5835 | 0.0 |
| 0.5964 | 38.0 | 7182 | 0.5832 | 0.0 |
| 0.6106 | 39.0 | 7371 | 0.5813 | 0.0 |
| 0.5865 | 40.0 | 7560 | 0.5816 | 0.0 |
| 0.6142 | 41.0 | 7749 | 0.5795 | 0.0 |
| 0.5903 | 42.0 | 7938 | 0.5790 | 0.0 |
| 0.5926 | 43.0 | 8127 | 0.5790 | 0.0 |
| 0.6077 | 44.0 | 8316 | 0.5786 | 0.0 |
| 0.6025 | 45.0 | 8505 | 0.5780 | 0.0 |
| 0.604 | 46.0 | 8694 | 0.5771 | 0.0488 |
| 0.5875 | 47.0 | 8883 | 0.5774 | 0.0488 |
| 0.5797 | 48.0 | 9072 | 0.5775 | 0.0488 |
| 0.6054 | 49.0 | 9261 | 0.5775 | 0.0488 |
| 0.5974 | 50.0 | 9450 | 0.5774 | 0.0488 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
FounderOfHuggingface/gpt2_lora_r4_e2e_nlg_t42000_e5
|
FounderOfHuggingface
| 2024-01-22T12:28:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | 2024-01-22T12:28:22Z |
---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
vlad-skripniuk/ppo-LunarLander-v2
|
vlad-skripniuk
| 2024-01-22T12:17:29Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-22T12:17:08Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 240.24 +/- 22.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed to follow the default `ppo-LunarLander-v2.zip` naming; adjust if needed):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed)
checkpoint = load_from_hub(repo_id="vlad-skripniuk/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ISTNetworks/Mistral_arabic_v2
|
ISTNetworks
| 2024-01-22T12:14:23Z | 3 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"mistral",
"en",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-22T11:02:04Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- mistral
- gguf
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Uploaded model
- **Developed by:** ISTNetworks
- **License:** apache-2.0
- **Finetuned from model:** mistralai/Mistral-7B-Instruct-arabic_v0.2
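Since the weights are published as GGUF, one possible way to run them locally is `llama-cpp-python`. This is only a sketch: the actual `.gguf` filename inside the repo and the Mistral-instruct `[INST]` prompt template are assumptions.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Replace the placeholder with the real .gguf filename listed in this repo
gguf_path = hf_hub_download(repo_id="ISTNetworks/Mistral_arabic_v2", filename="<quant-file>.gguf")
llm = Llama(model_path=gguf_path, n_ctx=4096)

out = llm("[INST] اشرح ما هو الذكاء الاصطناعي [/INST]", max_tokens=200)
print(out["choices"][0]["text"])
```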
|
arun100/whisper-base-tr-1
|
arun100
| 2024-01-22T12:11:26Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"tr",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-22T05:43:05Z |
---
language:
- tr
license: apache-2.0
base_model: openai/whisper-base
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Base Turkish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 tr
type: mozilla-foundation/common_voice_16_0
config: tr
split: test
args: tr
metrics:
- name: Wer
type: wer
value: 30.362728902323294
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Turkish
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_16_0 tr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4280
- Wer: 30.3627
## Model description
More information needed
## Intended uses & limitations
More information needed
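A minimal transcription sketch; the audio file path is a placeholder, and the language and task are pinned so the model transcribes Turkish rather than auto-detecting:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="arun100/whisper-base-tr-1",
    generate_kwargs={"language": "turkish", "task": "transcribe"},
)
print(asr("sample_tr.wav")["text"])  # any local audio file or URL
```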
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4162 | 1.03 | 500 | 0.5339 | 35.4346 |
| 0.3776 | 2.06 | 1000 | 0.4788 | 33.4892 |
| 0.3238 | 4.02 | 1500 | 0.4568 | 31.9497 |
| 0.2714 | 5.06 | 2000 | 0.4469 | 31.4277 |
| 0.3232 | 7.02 | 2500 | 0.4386 | 31.0991 |
| 0.2324 | 8.05 | 3000 | 0.4353 | 30.7406 |
| 0.2953 | 10.01 | 3500 | 0.4306 | 30.6035 |
| 0.2878 | 11.04 | 4000 | 0.4292 | 30.4278 |
| 0.3077 | 13.01 | 4500 | 0.4286 | 30.4155 |
| 0.2914 | 14.04 | 5000 | 0.4280 | 30.3627 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
kreabs/UNA-TheBeagle-7b-v1_finetuned_dolly_1600
|
kreabs
| 2024-01-22T12:09:58Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T12:02:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ghunghru/Misinformation-Covid-LowLearningRatebert-base-chinese
|
Ghunghru
| 2024-01-22T12:01:59Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-22T12:00:33Z |
---
base_model: bert-base-chinese
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: Misinformation-Covid-LowLearningRatebert-base-chinese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Misinformation-Covid-LowLearningRatebert-base-chinese
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5999
- F1: 0.2128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6765 | 1.0 | 189 | 0.6464 | 0.0 |
| 0.6809 | 2.0 | 378 | 0.6449 | 0.0 |
| 0.6734 | 3.0 | 567 | 0.6651 | 0.0 |
| 0.6827 | 4.0 | 756 | 0.6684 | 0.0 |
| 0.7095 | 5.0 | 945 | 0.6532 | 0.0 |
| 0.7 | 6.0 | 1134 | 0.6646 | 0.0 |
| 0.7192 | 7.0 | 1323 | 0.6497 | 0.0 |
| 0.6877 | 8.0 | 1512 | 0.6446 | 0.0 |
| 0.6831 | 9.0 | 1701 | 0.6305 | 0.0571 |
| 0.6633 | 10.0 | 1890 | 0.6203 | 0.1622 |
| 0.6668 | 11.0 | 2079 | 0.6219 | 0.1622 |
| 0.6482 | 12.0 | 2268 | 0.6242 | 0.1111 |
| 0.6543 | 13.0 | 2457 | 0.6117 | 0.15 |
| 0.6492 | 14.0 | 2646 | 0.6236 | 0.1622 |
| 0.6624 | 15.0 | 2835 | 0.6233 | 0.1622 |
| 0.6525 | 16.0 | 3024 | 0.6134 | 0.15 |
| 0.6466 | 17.0 | 3213 | 0.6118 | 0.1905 |
| 0.6406 | 18.0 | 3402 | 0.6191 | 0.15 |
| 0.6479 | 19.0 | 3591 | 0.6216 | 0.1538 |
| 0.6488 | 20.0 | 3780 | 0.6076 | 0.2128 |
| 0.6352 | 21.0 | 3969 | 0.6062 | 0.2174 |
| 0.6213 | 22.0 | 4158 | 0.6042 | 0.2174 |
| 0.6285 | 23.0 | 4347 | 0.6100 | 0.2326 |
| 0.6298 | 24.0 | 4536 | 0.6076 | 0.2128 |
| 0.6473 | 25.0 | 4725 | 0.6058 | 0.2128 |
| 0.5972 | 26.0 | 4914 | 0.6065 | 0.2222 |
| 0.6118 | 27.0 | 5103 | 0.6001 | 0.25 |
| 0.6116 | 28.0 | 5292 | 0.6059 | 0.2128 |
| 0.6289 | 29.0 | 5481 | 0.5992 | 0.25 |
| 0.5932 | 30.0 | 5670 | 0.6006 | 0.25 |
| 0.6076 | 31.0 | 5859 | 0.6009 | 0.2128 |
| 0.6033 | 32.0 | 6048 | 0.6082 | 0.2128 |
| 0.6235 | 33.0 | 6237 | 0.6023 | 0.2128 |
| 0.6237 | 34.0 | 6426 | 0.6079 | 0.2222 |
| 0.6176 | 35.0 | 6615 | 0.6081 | 0.2222 |
| 0.646 | 36.0 | 6804 | 0.6019 | 0.2128 |
| 0.6233 | 37.0 | 6993 | 0.6020 | 0.2128 |
| 0.6004 | 38.0 | 7182 | 0.6040 | 0.2174 |
| 0.6159 | 39.0 | 7371 | 0.5963 | 0.2449 |
| 0.5747 | 40.0 | 7560 | 0.6011 | 0.2174 |
| 0.6216 | 41.0 | 7749 | 0.5954 | 0.2449 |
| 0.5893 | 42.0 | 7938 | 0.5974 | 0.2083 |
| 0.5887 | 43.0 | 8127 | 0.5993 | 0.2128 |
| 0.5756 | 44.0 | 8316 | 0.5993 | 0.2128 |
| 0.6204 | 45.0 | 8505 | 0.5982 | 0.2083 |
| 0.584 | 46.0 | 8694 | 0.5966 | 0.2449 |
| 0.5809 | 47.0 | 8883 | 0.5989 | 0.2083 |
| 0.5873 | 48.0 | 9072 | 0.6002 | 0.2128 |
| 0.5999 | 49.0 | 9261 | 0.6001 | 0.2128 |
| 0.5888 | 50.0 | 9450 | 0.5999 | 0.2128 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Md-Z/finetuned-phi2-financial-sentiment-analysis
|
Md-Z
| 2024-01-22T11:58:19Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-01-22T11:45:23Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: finetuned-phi2-financial-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-phi2-financial-sentiment-analysis
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the FinancialPhraseBank dataset. The FinancialPhraseBank dataset is a comprehensive collection that captures the sentiments of financial news headlines from the viewpoint of a retail investor. Comprising two key columns, namely "Sentiment" and "News Headline," the dataset effectively classifies sentiments as either negative, neutral, or positive. This structured dataset serves as a valuable resource for analyzing and understanding the complex dynamics of sentiment in the domain of financial news.
It achieves the following results on the evaluation set:
- Loss: 1.4052
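Since this repository contains a PEFT adapter rather than full model weights, a hedged loading sketch would attach it to the `microsoft/phi-2` base model. The prompt template below is an assumption and should be replaced with whatever format was used during fine-tuning.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/phi-2"
adapter_id = "Md-Z/finetuned-phi2-financial-sentiment-analysis"

# Load the phi-2 base model, then attach the fine-tuned LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

# The prompt format below is an assumption, not the format documented by the authors.
prompt = (
    "Analyze the sentiment of this news headline and answer with positive, neutral or negative.\n"
    "Headline: Company X reports record quarterly profits.\nSentiment:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```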
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8067 | 1.0 | 112 | 1.5200 |
| 1.5055 | 2.0 | 225 | 1.4345 |
| 1.5221 | 3.0 | 337 | 1.4083 |
| 1.4956 | 3.98 | 448 | 1.4052 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ashishsr/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapter-dxtoicd-v4
|
ashishsr
| 2024-01-22T11:57:42Z | 5 | 0 |
peft
|
[
"peft",
"text-generation",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] |
text-generation
| 2024-01-22T11:20:10Z |
---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
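Pending official instructions, a minimal sketch for attaching this adapter to its TinyLlama base with 🤗 PEFT could look as follows. The adapter name hints at a diagnosis-to-ICD task, but the prompt and generation settings below are assumptions.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "ashishsr/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapter-dxtoicd-v4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# The expected prompt format is not documented; this chat message is purely illustrative.
messages = [{"role": "user", "content": "Which ICD-10 code corresponds to essential hypertension?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```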
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
ssands1979/FrankenPhi2-4x
|
ssands1979
| 2024-01-22T11:56:57Z | 4 | 0 |
transformers
|
[
"transformers",
"mixtral",
"text-generation",
"moe",
"frankenmoe",
"merge",
"mergekit",
"lazymergekit",
"lxuechen/phi-2-sft",
"mrm8488/phi-2-coder",
"Walmart-the-bag/phi-2-uncensored",
"ArtifactAI/phi-2-arxiv-physics-instruct",
"custom_code",
"base_model:AlgorithmicResearchGroup/phi-2-arxiv-physics-instruct",
"base_model:merge:AlgorithmicResearchGroup/phi-2-arxiv-physics-instruct",
"base_model:lxuechen/phi-2-sft",
"base_model:merge:lxuechen/phi-2-sft",
"base_model:mrm8488/phi-2-coder",
"base_model:merge:mrm8488/phi-2-coder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-22T11:56:55Z |
---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- lxuechen/phi-2-sft
- mrm8488/phi-2-coder
- Walmart-the-bag/phi-2-uncensored
- ArtifactAI/phi-2-arxiv-physics-instruct
base_model:
- lxuechen/phi-2-sft
- mrm8488/phi-2-coder
- Walmart-the-bag/phi-2-uncensored
- ArtifactAI/phi-2-arxiv-physics-instruct
---
# FrankenPhi2-4x
FrankenPhi2-4x is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [lxuechen/phi-2-sft](https://huggingface.co/lxuechen/phi-2-sft)
* [mrm8488/phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder)
* [Walmart-the-bag/phi-2-uncensored](https://huggingface.co/Walmart-the-bag/phi-2-uncensored)
* [ArtifactAI/phi-2-arxiv-physics-instruct](https://huggingface.co/ArtifactAI/phi-2-arxiv-physics-instruct)
## 🧩 Configuration
```yaml
base_model: microsoft/phi-2
experts:
- source_model: lxuechen/phi-2-sft
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- source_model: mrm8488/phi-2-coder
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: Walmart-the-bag/phi-2-uncensored
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- source_model: ArtifactAI/phi-2-arxiv-physics-instruct
positive_prompts:
- "physics"
- "math"
- "mathematics"
- "solve"
- "count"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ssands1979/FrankenPhi2-4x"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
jfmatos-isq/distilbert-base-uncased-finetuned-emotion
|
jfmatos-isq
| 2024-01-22T11:47:19Z | 90 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-22T10:31:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9251677974154898
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2078
- Accuracy: 0.925
- F1: 0.9252
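For reference, a minimal usage sketch with the Transformers pipeline; the example sentence is illustrative.
```python
from transformers import pipeline

# The emotion dataset uses six labels: sadness, joy, love, anger, fear and surprise.
classifier = pipeline(
    "text-classification",
    model="jfmatos-isq/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you this weekend!"))
```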
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7925 | 1.0 | 250 | 0.2900 | 0.913 | 0.9113 |
| 0.2343 | 2.0 | 500 | 0.2078 | 0.925 | 0.9252 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.10.3
|
King2806/a-corgi-gft-dog
|
King2806
| 2024-01-22T11:46:51Z | 1 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-22T11:42:49Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### A-corgi-gft-dog Dreambooth model trained by King2806 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: TNU2023021100010
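A minimal, untested Diffusers sketch for sampling from this concept; the instance token used in the prompt is an assumption inferred from the concept name.
```python
import torch
from diffusers import StableDiffusionPipeline

# "a-corgi-gft dog" as the instance token is an assumption based on the concept name.
pipe = StableDiffusionPipeline.from_pretrained(
    "King2806/a-corgi-gft-dog", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of a-corgi-gft dog playing on a beach", num_inference_steps=30).images[0]
image.save("corgi_sample.png")
```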
Sample pictures of this concept:

|
hr16/ControlNet-Depth-Anything-Pruned
|
hr16
| 2024-01-22T11:45:28Z | 0 | 4 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-22T11:43:02Z |
---
license: creativeml-openrail-m
---
Redistributed and pruned from https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints_controlnet
|
HexawareTech/phi_2_tax_faq
|
HexawareTech
| 2024-01-22T11:45:04Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"phi-msft",
"custom_code",
"region:us"
] | null | 2023-12-20T16:45:27Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
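For reference, a hedged sketch of the equivalent `BitsAndBytesConfig` for loading the base model and attaching this adapter. Using `microsoft/phi-2` as the base is an assumption suggested by the repository name and the `phi-msft` tag, not stated explicitly in the card.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Recreate the 4-bit quantization settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Assumed base model; adjust if the adapter was trained on a different checkpoint.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", quantization_config=bnb_config, trust_remote_code=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "HexawareTech/phi_2_tax_faq")
```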
### Framework versions
- PEFT 0.4.0
|
riazk/mistral_adapter
|
riazk
| 2024-01-22T11:41:20Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-01-22T11:34:07Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
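Pending details from the authors, a minimal sketch for loading this adapter on top of Mistral-7B-Instruct-v0.2 with 🤗 PEFT might look like this; the question and generation settings are illustrative.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "riazk/mistral_adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Mistral-Instruct expects the [INST] ... [/INST] chat format, applied here via the tokenizer.
messages = [{"role": "user", "content": "Explain in one sentence what a LoRA adapter is."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(base.device)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```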
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Parinitha003/HuggyDoge
|
Parinitha003
| 2024-01-22T11:40:21Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-22T11:39:31Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Parinitha003/HuggyDoge
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AleRams/app_prova2
|
AleRams
| 2024-01-22T11:35:39Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-15T16:13:46Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: app_prova2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# app_prova2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1559
- Accuracy: 0.935
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4056 | 0.53 | 100 | 1.3185 | 0.4267 |
| 1.3194 | 1.06 | 200 | 1.1961 | 0.4933 |
| 1.2148 | 1.6 | 300 | 1.0185 | 0.6167 |
| 0.9494 | 2.13 | 400 | 0.8830 | 0.6567 |
| 0.9645 | 2.66 | 500 | 0.7196 | 0.7433 |
| 0.6089 | 3.19 | 600 | 0.5523 | 0.8017 |
| 0.7564 | 3.72 | 700 | 0.4789 | 0.83 |
| 0.5319 | 4.26 | 800 | 0.3553 | 0.8683 |
| 0.3567 | 4.79 | 900 | 0.2926 | 0.88 |
| 0.2969 | 5.32 | 1000 | 0.2558 | 0.89 |
| 0.2578 | 5.85 | 1100 | 0.2054 | 0.9217 |
| 0.3002 | 6.38 | 1200 | 0.1744 | 0.9333 |
| 0.293 | 6.91 | 1300 | 0.1620 | 0.9483 |
| 0.132 | 7.45 | 1400 | 0.1646 | 0.92 |
| 0.1836 | 7.98 | 1500 | 0.1559 | 0.935 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Ghunghru/German-MedBERT
|
Ghunghru
| 2024-01-22T11:33:13Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:smanjil/German-MedBERT",
"base_model:finetune:smanjil/German-MedBERT",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-19T19:07:20Z |
---
base_model: smanjil/German-MedBERT
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: German-MedBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# German-MedBERT
This model is a fine-tuned version of [smanjil/German-MedBERT](https://huggingface.co/smanjil/German-MedBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5145
- F1: 0.4561
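A minimal, untested usage sketch; the example sentence is illustrative and the label mapping is not documented in this card.
```python
from transformers import pipeline

# Label names are undocumented, so the pipeline will return generic LABEL_* ids.
classifier = pipeline("text-classification", model="Ghunghru/German-MedBERT")
print(classifier("Die Corona-Impfung verändert das menschliche Erbgut."))  # illustrative German claim
```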
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.693 | 1.0 | 189 | 0.6754 | 0.0698 |
| 0.6853 | 2.0 | 378 | 0.6626 | 0.0339 |
| 0.6654 | 3.0 | 567 | 0.6499 | 0.0488 |
| 0.6562 | 4.0 | 756 | 0.6399 | 0.0541 |
| 0.6554 | 5.0 | 945 | 0.6335 | 0.0556 |
| 0.6394 | 6.0 | 1134 | 0.6260 | 0.0571 |
| 0.6452 | 7.0 | 1323 | 0.6220 | 0.0571 |
| 0.6257 | 8.0 | 1512 | 0.6161 | 0.0571 |
| 0.6334 | 9.0 | 1701 | 0.6117 | 0.0571 |
| 0.6302 | 10.0 | 1890 | 0.6068 | 0.0571 |
| 0.6151 | 11.0 | 2079 | 0.6011 | 0.0571 |
| 0.6121 | 12.0 | 2268 | 0.5961 | 0.0571 |
| 0.6097 | 13.0 | 2457 | 0.5915 | 0.0571 |
| 0.5929 | 14.0 | 2646 | 0.5865 | 0.0556 |
| 0.5955 | 15.0 | 2835 | 0.5822 | 0.0556 |
| 0.5893 | 16.0 | 3024 | 0.5776 | 0.1053 |
| 0.5936 | 17.0 | 3213 | 0.5731 | 0.1 |
| 0.5769 | 18.0 | 3402 | 0.5687 | 0.1 |
| 0.5692 | 19.0 | 3591 | 0.5646 | 0.1 |
| 0.5739 | 20.0 | 3780 | 0.5604 | 0.2326 |
| 0.5705 | 21.0 | 3969 | 0.5564 | 0.2326 |
| 0.5651 | 22.0 | 4158 | 0.5525 | 0.2727 |
| 0.5654 | 23.0 | 4347 | 0.5494 | 0.2727 |
| 0.5527 | 24.0 | 4536 | 0.5456 | 0.2727 |
| 0.5542 | 25.0 | 4725 | 0.5425 | 0.2727 |
| 0.5464 | 26.0 | 4914 | 0.5395 | 0.2727 |
| 0.5383 | 27.0 | 5103 | 0.5364 | 0.3111 |
| 0.5323 | 28.0 | 5292 | 0.5348 | 0.3111 |
| 0.5343 | 29.0 | 5481 | 0.5318 | 0.3404 |
| 0.5305 | 30.0 | 5670 | 0.5299 | 0.4082 |
| 0.5252 | 31.0 | 5859 | 0.5278 | 0.4 |
| 0.516 | 32.0 | 6048 | 0.5270 | 0.3922 |
| 0.5181 | 33.0 | 6237 | 0.5243 | 0.4231 |
| 0.5202 | 34.0 | 6426 | 0.5230 | 0.4231 |
| 0.5068 | 35.0 | 6615 | 0.5224 | 0.4231 |
| 0.514 | 36.0 | 6804 | 0.5205 | 0.4528 |
| 0.5014 | 37.0 | 6993 | 0.5194 | 0.4528 |
| 0.4899 | 38.0 | 7182 | 0.5188 | 0.4444 |
| 0.5104 | 39.0 | 7371 | 0.5164 | 0.4364 |
| 0.4823 | 40.0 | 7560 | 0.5174 | 0.4444 |
| 0.515 | 41.0 | 7749 | 0.5155 | 0.4364 |
| 0.4906 | 42.0 | 7938 | 0.5154 | 0.4364 |
| 0.4853 | 43.0 | 8127 | 0.5158 | 0.4364 |
| 0.5006 | 44.0 | 8316 | 0.5153 | 0.4364 |
| 0.503 | 45.0 | 8505 | 0.5146 | 0.4561 |
| 0.4915 | 46.0 | 8694 | 0.5141 | 0.4561 |
| 0.4903 | 47.0 | 8883 | 0.5144 | 0.4561 |
| 0.4892 | 48.0 | 9072 | 0.5146 | 0.4561 |
| 0.4939 | 49.0 | 9261 | 0.5146 | 0.4561 |
| 0.5007 | 50.0 | 9450 | 0.5145 | 0.4561 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Nazaninmnd/DreamBooth_KW
|
Nazaninmnd
| 2024-01-22T11:33:09Z | 21 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-22T10:28:28Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of KW
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Nazaninmnd/DreamBooth_KW
This is a DreamBooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained with the instance prompt "a photo of KW" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: True.
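A minimal, untested Diffusers sketch built around the instance prompt above; the scene description is illustrative.
```python
import torch
from diffusers import StableDiffusionPipeline

# Uses the instance prompt from this card ("a photo of KW").
pipe = StableDiffusionPipeline.from_pretrained(
    "Nazaninmnd/DreamBooth_KW", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of KW in a snowy forest", num_inference_steps=30).images[0]
image.save("kw_sample.png")
```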
|
arun100/whisper-small-derived-hi-2
|
arun100
| 2024-01-22T11:27:18Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:arun100/whisper-small-derived-hi-1",
"base_model:finetune:arun100/whisper-small-derived-hi-1",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-22T03:14:27Z |
---
language:
- hi
license: apache-2.0
base_model: arun100/whisper-small-derived-hi-1
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Small Hindi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 hi
type: mozilla-foundation/common_voice_16_0
config: hi
split: test
args: hi
metrics:
- name: Wer
type: wer
value: 9.288054935192335
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hindi
This model is a fine-tuned version of [arun100/whisper-small-derived-hi-1](https://huggingface.co/arun100/whisper-small-derived-hi-1) on the mozilla-foundation/common_voice_16_0 hi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1458
- Wer: 9.2881
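The card ships no usage snippet; a minimal, untested transcription sketch follows. The audio file path is a placeholder, and the language/task overrides may be unnecessary if they are already baked into the model's generation config.
```python
from transformers import pipeline

# Minimal sketch: transcribe a Hindi audio file with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="arun100/whisper-small-derived-hi-2")
result = asr("sample_hindi.wav", generate_kwargs={"language": "hindi", "task": "transcribe"})
print(result["text"])
```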
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1368 | 1.02 | 100 | 0.1665 | 9.6951 |
| 0.1196 | 2.05 | 200 | 0.1586 | 9.6011 |
| 0.107 | 3.07 | 300 | 0.1538 | 9.4008 |
| 0.1051 | 5.02 | 400 | 0.1504 | 9.3235 |
| 0.0988 | 6.05 | 500 | 0.1486 | 9.4467 |
| 0.0939 | 7.07 | 600 | 0.1474 | 9.4425 |
| 0.0901 | 9.02 | 700 | 0.1464 | 9.3006 |
| 0.0859 | 10.04 | 800 | 0.1459 | 9.4362 |
| 0.0859 | 11.07 | 900 | 0.1458 | 9.2881 |
| 0.0839 | 13.02 | 1000 | 0.1456 | 9.2901 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
kamal24/gpt_to_human
|
kamal24
| 2024-01-22T11:18:59Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:kamal24/gpt_to_human",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-22T11:09:53Z |
---
license: openrail
datasets:
- kamal24/gpt_to_human
language:
- en
library_name: transformers
inference:
parameters:
num_beams: 5
num_beam_groups: 5
num_return_sequences: 5
repetition_penalty: 10.01
diversity_penalty: 3.01
no_repeat_ngram_size: 2
temperature: 0.7
max_length: 128
widget:
- text: What are the best places to see in New York?
example_title: New York tourist attractions
- text: When should I go to the doctor?
example_title: Doctor's time
- text: >-
Rammstein's album Mutter was recorded in the south of France in May and June
2000, and mixed in Stockholm in October of that year.
example_title: Rammstein's album Mutter
pipeline_tag: text2text-generation
---
This model was trained on our [ChatGPT paraphrase dataset](https://huggingface.co/datasets/kamal24/gpt_to_human).
This dataset is based on the [Quora question pairs](https://www.kaggle.com/competitions/quora-question-pairs) competition, texts from [SQuAD 2.0](https://huggingface.co/datasets/squad_v2) and the [CNN news dataset](https://huggingface.co/datasets/cnn_dailymail).
This model is based on the T5-base model. We used transfer learning to teach the model to generate paraphrases on par with ChatGPT, and we consider it one of the best paraphrasing models available on the Hugging Face Hub.
[Kaggle](https://www.kaggle.com/datasets/vladimirvorobevv/chatgpt-paraphrases) link
[Author's LinkedIn](https://www.linkedin.com/in/vladimir-vorobev/) link
## Deploying example
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained("kamal24/gpt_to_human")
model = AutoModelForSeq2SeqLM.from_pretrained("kamal24/gpt_to_human").to(device)
def paraphrase(
question,
num_beams=5,
num_beam_groups=5,
num_return_sequences=5,
repetition_penalty=10.0,
diversity_penalty=3.0,
no_repeat_ngram_size=2,
temperature=0.7,
max_length=128
):
input_ids = tokenizer(
f'paraphrase: {question}',
return_tensors="pt", padding="longest",
max_length=max_length,
truncation=True,
).input_ids.to(device)
outputs = model.generate(
input_ids, temperature=temperature, repetition_penalty=repetition_penalty,
num_return_sequences=num_return_sequences, no_repeat_ngram_size=no_repeat_ngram_size,
num_beams=num_beams, num_beam_groups=num_beam_groups,
max_length=max_length, diversity_penalty=diversity_penalty
)
res = tokenizer.batch_decode(outputs, skip_special_tokens=True)
return res
```
## Usage examples
**Input:**
```python
text = 'What are the best places to see in New York?'
paraphrase(text)
```
**Output:**
```python
['What are some must-see places in New York?',
'Can you suggest some must-see spots in New York?',
'Where should one go to experience the best NYC has to offer?',
'Which places should I visit in New York?',
'What are the top destinations to explore in New York?']
```
**Input:**
```python
text = "Rammstein's album Mutter was recorded in the south of France in May and June 2000, and mixed in Stockholm in October of that year."
paraphrase(text)
```
**Output:**
```python
['In May and June 2000, Rammstein travelled to the south of France to record his album Mutter, which was mixed in Stockholm in October of that year.',
'The album Mutter by Rammstein was recorded in the south of France during May and June 2000, with mixing taking place in Stockholm in October of that year.',
'The album Mutter by Rammstein was recorded in the south of France during May and June 2000, with mixing taking place in Stockholm in October of that year. It',
'Mutter, the album released by Rammstein, was recorded in southern France during May and June 2000, with mixing taking place between October and September.',
'In May and June 2000, Rammstein recorded his album Mutter in the south of France, with the mix being made at Stockholm during October.']
```
## Train parameters
```python
epochs = 5
batch_size = 64
max_length = 128
lr = 5e-5
batches_qty = 196465
betas = (0.9, 0.999)
eps = 1e-08
```
### BibTeX entry and citation info
```bibtex
@inproceedings{chatgpt_paraphraser,
author={Vladimir Vorobev, Maxim Kuznetsov},
title={A paraphrasing model based on ChatGPT paraphrases},
year={2023}
}
```
|
wooseok0303/distilbert-base-uncased-finetuned-clinc
|
wooseok0303
| 2024-01-22T11:18:54Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-22T11:00:16Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8004
- Accuracy: 0.9152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.3221 | 1.0 | 318 | 3.3251 | 0.7197 |
| 2.6661 | 2.0 | 636 | 1.9115 | 0.8494 |
| 1.5821 | 3.0 | 954 | 1.1901 | 0.89 |
| 1.0401 | 4.0 | 1272 | 0.8849 | 0.9087 |
| 0.8194 | 5.0 | 1590 | 0.8004 | 0.9152 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.13.1+cu116
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
Seri0usLee/distilbert-base-uncased-finetuned-ner
|
Seri0usLee
| 2024-01-22T11:11:00Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-22T08:44:50Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.929
- Recall: 0.9353
- F1: 0.9322
- Accuracy: 0.9837
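A minimal usage sketch with the token-classification pipeline; the example sentence is illustrative and the entity label set is not documented in this card.
```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces back into full entity spans.
ner = pipeline(
    "token-classification",
    model="Seri0usLee/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is a company based in New York City."))
```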
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2529 | 1.0 | 878 | 0.0719 | 0.8910 | 0.9209 | 0.9057 | 0.9788 |
| 0.0504 | 2.0 | 1756 | 0.0583 | 0.9235 | 0.9332 | 0.9283 | 0.9831 |
| 0.0315 | 3.0 | 2634 | 0.0611 | 0.929 | 0.9353 | 0.9322 | 0.9837 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cpu
- Datasets 2.16.1
- Tokenizers 0.15.0
|
imagepipeline/LEOSAMs-HelloWorld-SDXL-v3
|
imagepipeline
| 2024-01-22T10:58:30Z | 46 | 5 |
diffusers
|
[
"diffusers",
"imagepipeline",
"imagepipeline.io",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-01-22T10:55:13Z |
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## LEOSAMs-HelloWorld-SDXL-v3
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/6c2b58b8-e34c-4292-9237-311d8ddc48db/width=525/6c2b58b8-e34c-4292-9237-311d8ddc48db.jpeg" alt="Generated by Image Pipeline" style="border-radius: 10px;">
**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - Better fingers and limbs
[](https://imagepipeline.io/models/LEOSAMs-HelloWorld-SDXL-v3?id=5bff1e5f-8d0e-4764-9720-a3765f2d3860/)
## How to try this model ?
You can try using it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our documentation
[](https://docs.imagepipeline.io/docs/introduction)
```python
import requests
import json
url = "https://imagepipeline.io/sdxl/text2image/v1/run"
payload = json.dumps({
"model_id": "5bff1e5f-8d0e-4764-9720-a3765f2d3860",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "",
"lora_weights": ""
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready to use `MODELS` like this for `SD 1.5` and `SDXL` :
[](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sdxl/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
Haya44/t-shirt-finetune
|
Haya44
| 2024-01-22T10:56:20Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-22T10:51:53Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### t-shirt_finetune Dreambooth model trained by Haya44 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:




































































































|
shacd/gen_tokenizer_out
|
shacd
| 2024-01-22T10:53:50Z | 2 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-22T10:53:32Z |
---
license: other
license_name: none
license_link: LICENSE
---
|
Federic/TestPrompt
|
Federic
| 2024-01-22T10:47:48Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-01-22T08:39:27Z |
---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: TestPrompt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TestPrompt
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|