modelId (string, 5–122 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC]) | downloads (int64, 0–738M) | likes (int64, 0–11k) | library_name (string, 245 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 48 classes) | createdAt (timestamp[us, tz=UTC]) | card (string, 1–901k chars)
---|---|---|---|---|---|---|---|---|---|
MarcoGr/dqn-SpaceInvadersNoFrameskip-v4 | MarcoGr | 2024-07-02T13:56:34Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-07-02T13:56:04Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 334.00 +/- 136.58
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MarcoGr -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MarcoGr -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MarcoGr
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 500000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
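As a hedged addition (not part of the original card): after downloading with `rl_zoo3.load_from_hub` as shown above, the checkpoint can also be loaded directly with stable-baselines3. The path below assumes the RL Zoo's usual `logs/{algo}/{env}_{run_id}/{env}.zip` layout; adjust it to your local `logs/` contents.
```python
# Minimal sketch: load the downloaded DQN checkpoint with SB3 directly.
# The "_1" run-id folder is an assumption; check your logs/ directory.
from stable_baselines3 import DQN

model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")
print(model.policy)
```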
|
Detsutut/Igea-1B-qa-GGUF-Q4 | Detsutut | 2024-07-02T14:00:20Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:Detsutut/Igea-1B-v0.0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:56:23Z | ---
base_model: Detsutut/Igea-1B-v0.0.1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** Detsutut
- **License:** apache-2.0
- **Finetuned from model:** Detsutut/Igea-1B-v0.0.1
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
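Since the card does not include usage instructions, here is a hedged sketch (an assumption, not from the card) for locating the quantized GGUF file before loading it with a GGUF-capable runtime such as llama.cpp:
```python
# Hedged sketch: list the .gguf file(s) in this repo via huggingface_hub,
# since the card does not name the quantized file explicitly.
from huggingface_hub import list_repo_files

gguf_files = [f for f in list_repo_files("Detsutut/Igea-1B-qa-GGUF-Q4") if f.endswith(".gguf")]
print(gguf_files)
```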
|
buetnlpbio/bpe-only-rna-bert | buetnlpbio | 2024-07-02T14:35:46Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-07-02T13:56:57Z | ---
library_name: transformers
---
<span style="color:red">`buetnlpbio/bpe-only-rna-bert` is trained on BPE tokens only (for ablation). Please consider using `buetnlpbio/birna-bert` instead.</span>
# Model Card for Model ID
BiRNA-BERT is a BERT-style transformer encoder model that generates embeddings for RNA sequences. BiRNA-BERT has been trained on BPE tokens and individual nucleotides. As a result, it can generate both granular nucleotide-level embeddings and efficient sequence-level embeddings (using BPE).
BiRNA-BERT was trained using the MosaicBERT framework - https://huggingface.co/mosaicml/mosaic-bert-base
# Usage
## Extracting RNA embeddings
```python
import torch
import transformers
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("buetnlpbio/birna-tokenizer")
config = transformers.BertConfig.from_pretrained("buetnlpbio/bpe-only-rna-bert")
mysterybert = AutoModelForMaskedLM.from_pretrained("buetnlpbio/bpe-only-rna-bert", config=config, trust_remote_code=True)
mysterybert.cls = torch.nn.Identity()
# To get sequence embeddings
seq_embed = mysterybert(**tokenizer("AGCTACGTACGT", return_tensors="pt"))
print(seq_embed.logits.shape) # CLS + 4 BPE token embeddings + SEP
```
|
buetnlpbio/birna-bert | buetnlpbio | 2024-07-02T14:31:44Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-07-02T13:57:25Z | ---
library_name: transformers
---
# Model Card for Model ID
BiRNA-BERT is a BERT-style transformer encoder model that generates embeddings for RNA sequences. BiRNA-BERT has been trained on BPE tokens and individual nucleotides. As a result, it can generate both granular nucleotide-level embeddings and efficient sequence-level embeddings (using BPE).
BiRNA-BERT was trained using the MosaicBERT framework - https://huggingface.co/mosaicml/mosaic-bert-base
# Usage
## Extracting RNA embeddings
```python
import torch
import transformers
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("buetnlpbio/birna-tokenizer")
config = transformers.BertConfig.from_pretrained("buetnlpbio/birna-bert")
mysterybert = AutoModelForMaskedLM.from_pretrained("buetnlpbio/birna-bert", config=config, trust_remote_code=True)
mysterybert.cls = torch.nn.Identity()
# To get sequence embeddings
seq_embed = mysterybert(**tokenizer("AGCTACGTACGT", return_tensors="pt"))
print(seq_embed.logits.shape) # CLS + 4 BPE token embeddings + SEP
# To get nucleotide embeddings
char_embed = mysterybert(**tokenizer("A G C T A C G T A C G T", return_tensors="pt"))
print(char_embed.logits.shape) # CLS + 12 nucleotide token embeddings + SEP
```
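A hedged follow-up to the snippet above (an assumption, not from the original card): the token-level outputs can be mean-pooled into a single fixed-size vector per sequence, which is a common way to obtain sequence embeddings.
```python
# Hedged follow-up: mean-pool the token-level outputs from the snippet
# above into one fixed-size embedding per sequence.
seq_vector = seq_embed.logits.mean(dim=1)  # shape: (batch, hidden_size)
print(seq_vector.shape)
```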
## Explicitly increasing max sequence length
```python
import transformers
from transformers import AutoModelForMaskedLM

config = transformers.BertConfig.from_pretrained("buetnlpbio/birna-bert")
config.alibi_starting_size = 2048  # raise the max sequence length from the config default of 1024 to 2048
mysterybert = AutoModelForMaskedLM.from_pretrained("buetnlpbio/birna-bert", config=config, trust_remote_code=True)
```
|
buetnlpbio/bidna-tokenizer | buetnlpbio | 2024-07-02T13:57:46Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T13:57:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
buetnlpbio/nuc-only-dna-bert | buetnlpbio | 2024-07-02T14:44:08Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-07-02T13:58:26Z | ---
library_name: transformers
---
<span style="color:red">`buetnlpbio/nuc-only-dna-bert` was trained only on the Human Genome DNA dataset for 1 epoch (for ablation). Performance on other DNA types may be limited.</span>
# Model Card for Model ID
BiRNA-BERT is a BERT-style transformer encoder model that generates embeddings for RNA sequences. BiRNA-BERT has been trained on BPE tokens and individual nucleotides. As a result, it can generate both granular nucleotide-level embeddings and efficient sequence-level embeddings (using BPE).
BiRNA-BERT was trained using the MosaicBERT framework - https://huggingface.co/mosaicml/mosaic-bert-base
# Usage
## Extracting RNA embeddings
```python
import torch
import transformers
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("buetnlpbio/bidna-tokenizer")
config = transformers.BertConfig.from_pretrained("buetnlpbio/nuc-only-dna-bert")
mysterybert = AutoModelForMaskedLM.from_pretrained("buetnlpbio/nuc-only-dna-bert", config=config, trust_remote_code=True)
mysterybert.cls = torch.nn.Identity()
# To get nucleotide embeddings
char_embed = mysterybert(**tokenizer("A G C T A C G T A C G T", return_tensors="pt"))
print(char_embed.logits.shape) # CLS + 12 nucleotide token embeddings + SEP
``` |
Poxios/EEVE-Korean-Instruct-10.8B-v1.0-Q4_K_M-GGUF | Poxios | 2024-07-02T14:03:05Z | 0 | 0 | null | [
"gguf",
"generated_from_trainer",
"llama-cpp",
"gguf-my-repo",
"base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T13:58:53Z | ---
base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
license: apache-2.0
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
model-index:
- name: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
results: []
---
# Poxios/EEVE-Korean-Instruct-10.8B-v1.0-Q4_K_M-GGUF
This model was converted to GGUF format from [`yanolja/EEVE-Korean-Instruct-10.8B-v1.0`](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Poxios/EEVE-Korean-Instruct-10.8B-v1.0-Q4_K_M-GGUF --hf-file eeve-korean-instruct-10.8b-v1.0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Poxios/EEVE-Korean-Instruct-10.8B-v1.0-Q4_K_M-GGUF --hf-file eeve-korean-instruct-10.8b-v1.0-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Poxios/EEVE-Korean-Instruct-10.8B-v1.0-Q4_K_M-GGUF --hf-file eeve-korean-instruct-10.8b-v1.0-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Poxios/EEVE-Korean-Instruct-10.8B-v1.0-Q4_K_M-GGUF --hf-file eeve-korean-instruct-10.8b-v1.0-q4_k_m.gguf -c 2048
```
|
buetnlpbio/bpe-only-dna-bert | buetnlpbio | 2024-07-02T14:42:59Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-07-02T13:58:57Z | ---
library_name: transformers
---
<span style="color:red">`buetnlpbio/bpe-only-dna-bert` was trained only on the Human Genome DNA dataset for 1 epoch (for ablation). Performance on other DNA types may be limited.</span>
# Model Card for Model ID
BiRNA-BERT is a BERT-style transformer encoder model that generates embeddings for RNA sequences. BiRNA-BERT has been trained on BPE tokens and individual nucleotides. As a result, it can generate both granular nucleotide-level embeddings and efficient sequence-level embeddings (using BPE).
BiRNA-BERT was trained using the MosaicBERT framework - https://huggingface.co/mosaicml/mosaic-bert-base
# Usage
## Extracting RNA embeddings
```python
import torch
import transformers
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("buetnlpbio/bidna-tokenizer")
config = transformers.BertConfig.from_pretrained("buetnlpbio/bpe-only-dna-bert")
mysterybert = AutoModelForMaskedLM.from_pretrained("buetnlpbio/bpe-only-dna-bert", config=config, trust_remote_code=True)
mysterybert.cls = torch.nn.Identity()
# To get sequence embeddings
seq_embed = mysterybert(**tokenizer("AGCTACGTACGT", return_tensors="pt"))
print(seq_embed.logits.shape) # CLS + 4 BPE token embeddings + SEP
```
|
buetnlpbio/bidna-bert | buetnlpbio | 2024-07-02T14:42:01Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-07-02T13:59:29Z | ---
library_name: transformers
---
<span style="color:red">`buetnlpbio/bidna-bert` was trained only on the Human Genome DNA dataset for 1 epoch (for ablation). Performance on other DNA types may be limited.</span>
# Model Card for Model ID
BiRNA-BERT is a BERT-style transformer encoder model that generates embeddings for RNA sequences. BiRNA-BERT has been trained on BPE tokens and individual nucleotides. As a result, it can generate both granular nucleotide-level embeddings and efficient sequence-level embeddings (using BPE).
BiRNA-BERT was trained using the MosaicBERT framework - https://huggingface.co/mosaicml/mosaic-bert-base
# Usage
## Extracting RNA embeddings
```python
import torch
import transformers
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("buetnlpbio/bidna-tokenizer")
config = transformers.BertConfig.from_pretrained("buetnlpbio/bidna-bert")
mysterybert = AutoModelForMaskedLM.from_pretrained("buetnlpbio/bidna-bert", config=config, trust_remote_code=True)
mysterybert.cls = torch.nn.Identity()
# To get sequence embeddings
seq_embed = mysterybert(**tokenizer("AGCTACGTACGT", return_tensors="pt"))
print(seq_embed.logits.shape) # CLS + 4 BPE token embeddings + SEP
# To get nucleotide embeddings
char_embed = mysterybert(**tokenizer("A G C T A C G T A C G T", return_tensors="pt"))
print(char_embed.logits.shape) # CLS + 12 nucleotide token embeddings + SEP
```
|
chyiin/test | chyiin | 2024-07-02T14:00:15Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:00:15Z | Entry not found |
ayaenna/Age_Prediction | ayaenna | 2024-07-02T14:12:26Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T14:00:22Z | Entry not found |
B3V4F/happypig | B3V4F | 2024-07-02T14:03:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T14:00:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
houbw/llama38b_ruozhiba_6 | houbw | 2024-07-02T14:02:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:01:16Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** houbw
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ferrazzipietro/Llama-2-7b-chat-hfspecialTkn_en.layer1_NoQuant_16_32_0.02_8 | ferrazzipietro | 2024-07-02T14:01:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:01:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
taehyunzzz/switch-base-16-samsum-top-4-choose-1-deconly | taehyunzzz | 2024-07-03T01:20:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"switch_transformers",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-02T14:01:52Z | ---
license: apache-2.0
base_model: google/switch-base-16
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: switch-base-16-samsum-top-4-choose-1-deconly
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.4941
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switch-base-16-samsum-top-4-choose-1-deconly
This model is a fine-tuned version of [google/switch-base-16](https://huggingface.co/google/switch-base-16) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5622
- Rouge1: 47.4941
- Rouge2: 24.8589
- Rougel: 40.7324
- Rougelsum: 44.2754
- Gen Len: 16.978
## Model description
More information needed
## Intended uses & limitations
More information needed
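The card leaves usage unspecified; as a hedged addition (not the author's code), here is a minimal summarization sketch using transformers' standard seq2seq API, which Switch Transformers checkpoints support:
```python
# Hedged sketch (assumed standard transformers usage, not from the card):
# summarize a SAMSum-style dialogue with this fine-tuned checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "taehyunzzz/switch-base-16-samsum-top-4-choose-1-deconly"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure!"
inputs = tokenizer(dialogue, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```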
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code mapping follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
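A hedged illustration (an assumed mapping, not the author's actual training script) of how the hyperparameters above translate to transformers' `Seq2SeqTrainingArguments`:
```python
# Hedged mapping of the listed hyperparameters onto transformers'
# Seq2SeqTrainingArguments; the output_dir name is an assumption.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="switch-base-16-samsum-top-4-choose-1-deconly",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
)
```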
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.7675 | 0.2172 | 200 | 2.5114 | 34.1944 | 14.3493 | 29.9504 | 31.6992 | 12.9976 |
| 2.63 | 0.4343 | 400 | 2.0707 | 40.4717 | 18.3069 | 34.1925 | 37.4723 | 16.4291 |
| 2.4061 | 0.6515 | 600 | 1.8689 | 43.0896 | 19.9888 | 36.381 | 40.0451 | 16.8863 |
| 2.1154 | 0.8686 | 800 | 1.8168 | 43.8893 | 20.958 | 37.2973 | 40.436 | 15.5293 |
| 2.0704 | 1.0858 | 1000 | 1.7271 | 45.2335 | 22.1192 | 38.218 | 42.0079 | 16.3447 |
| 2.0927 | 1.3029 | 1200 | 1.7225 | 45.4603 | 22.5472 | 38.5835 | 41.9597 | 16.2286 |
| 2.1183 | 1.5201 | 1400 | 1.6814 | 45.948 | 22.8422 | 38.7147 | 42.5818 | 16.824 |
| 1.9599 | 1.7372 | 1600 | 1.6606 | 45.4704 | 22.6218 | 38.9036 | 42.458 | 16.1381 |
| 1.9618 | 1.9544 | 1800 | 1.6422 | 46.5133 | 23.6138 | 39.3448 | 43.0852 | 17.0599 |
| 1.7866 | 2.1716 | 2000 | 1.6326 | 46.8714 | 24.0261 | 39.8869 | 43.5714 | 16.5587 |
| 1.7856 | 2.3887 | 2200 | 1.6260 | 46.3357 | 23.9611 | 39.8074 | 43.1098 | 16.0379 |
| 1.8593 | 2.6059 | 2400 | 1.6095 | 46.484 | 24.0606 | 39.8423 | 43.4854 | 16.5795 |
| 1.7925 | 2.8230 | 2600 | 1.5955 | 46.3699 | 23.5268 | 39.6176 | 43.192 | 16.6137 |
| 1.6289 | 3.0402 | 2800 | 1.6030 | 47.3992 | 24.4279 | 40.3178 | 44.0069 | 16.9474 |
| 1.6981 | 3.2573 | 3000 | 1.5962 | 47.47 | 24.7195 | 40.6542 | 44.0862 | 16.8521 |
| 1.744 | 3.4745 | 3200 | 1.5839 | 47.6432 | 24.8733 | 40.8508 | 44.3951 | 16.9572 |
| 1.6728 | 3.6916 | 3400 | 1.5786 | 47.4388 | 24.8413 | 40.6588 | 44.2885 | 16.9071 |
| 1.67 | 3.9088 | 3600 | 1.5758 | 47.3227 | 24.5953 | 40.3763 | 44.027 | 16.802 |
| 1.5433 | 4.1260 | 3800 | 1.5741 | 47.409 | 24.6059 | 40.4526 | 44.2102 | 17.0428 |
| 1.6082 | 4.3431 | 4000 | 1.5687 | 47.5651 | 24.7096 | 40.6474 | 44.2447 | 16.8582 |
| 1.6276 | 4.5603 | 4200 | 1.5664 | 47.769 | 24.9825 | 40.8941 | 44.5362 | 16.9413 |
| 1.7141 | 4.7774 | 4400 | 1.5624 | 47.5164 | 24.8424 | 40.5704 | 44.2561 | 17.0196 |
| 1.6044 | 4.9946 | 4600 | 1.5622 | 47.4941 | 24.8589 | 40.7324 | 44.2754 | 16.978 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
taehyunzzz/switch-base-16-samsum-top-1 | taehyunzzz | 2024-07-02T23:32:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"switch_transformers",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/switch-base-16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-02T14:01:55Z | ---
license: apache-2.0
base_model: google/switch-base-16
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: switch-base-16-samsum-top-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 47.642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switch-base-16-samsum-top-1
This model is a fine-tuned version of [google/switch-base-16](https://huggingface.co/google/switch-base-16) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4152
- Rouge1: 47.642
- Rouge2: 24.455
- Rougel: 40.5325
- Rougelsum: 44.2471
- Gen Len: 16.9474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.6055 | 0.2172 | 200 | 2.5282 | 35.4847 | 15.2617 | 29.6237 | 32.572 | 15.0733 |
| 2.2884 | 0.4343 | 400 | 1.7336 | 42.5128 | 19.0282 | 35.3702 | 39.3583 | 16.7653 |
| 2.0769 | 0.6515 | 600 | 1.6140 | 44.2133 | 20.7417 | 37.0948 | 40.7433 | 16.8667 |
| 1.9006 | 0.8686 | 800 | 1.5648 | 45.2886 | 22.2146 | 38.1394 | 41.926 | 16.5159 |
| 1.8026 | 1.0858 | 1000 | 1.5172 | 45.0405 | 22.5057 | 38.2452 | 41.9349 | 16.3888 |
| 1.7981 | 1.3029 | 1200 | 1.5090 | 46.6269 | 23.4678 | 39.4572 | 42.9409 | 16.3594 |
| 1.8836 | 1.5201 | 1400 | 1.4766 | 46.6666 | 23.335 | 39.562 | 43.2311 | 16.8301 |
| 1.7022 | 1.7372 | 1600 | 1.4753 | 47.1723 | 24.256 | 40.3642 | 43.6667 | 16.132 |
| 1.7206 | 1.9544 | 1800 | 1.4613 | 46.6967 | 23.7383 | 39.3786 | 43.2302 | 17.4584 |
| 1.5463 | 2.1716 | 2000 | 1.4522 | 46.9588 | 23.7671 | 39.6852 | 43.2298 | 16.4425 |
| 1.5813 | 2.3887 | 2200 | 1.4460 | 46.7339 | 23.937 | 40.0396 | 43.3039 | 16.4597 |
| 1.592 | 2.6059 | 2400 | 1.4391 | 46.9846 | 24.1476 | 40.0553 | 43.6259 | 16.4719 |
| 1.5721 | 2.8230 | 2600 | 1.4331 | 47.1888 | 24.0887 | 39.9051 | 43.6749 | 16.6296 |
| 1.4388 | 3.0402 | 2800 | 1.4355 | 47.353 | 24.2403 | 40.1024 | 43.8869 | 16.8863 |
| 1.4366 | 3.2573 | 3000 | 1.4326 | 47.6898 | 24.5475 | 40.3292 | 44.1485 | 16.8276 |
| 1.4929 | 3.4745 | 3200 | 1.4260 | 47.6964 | 24.5274 | 40.5117 | 44.1713 | 16.9731 |
| 1.4679 | 3.6916 | 3400 | 1.4203 | 48.1568 | 24.9221 | 40.9733 | 44.7026 | 17.1271 |
| 1.4302 | 3.9088 | 3600 | 1.4157 | 47.9171 | 24.9131 | 40.8573 | 44.5646 | 16.956 |
| 1.294 | 4.1260 | 3800 | 1.4137 | 47.9435 | 24.787 | 40.8893 | 44.4981 | 16.857 |
| 1.3844 | 4.3431 | 4000 | 1.4182 | 47.9337 | 24.7027 | 40.5803 | 44.494 | 17.0 |
| 1.3604 | 4.5603 | 4200 | 1.4139 | 47.7856 | 24.7309 | 40.7263 | 44.3018 | 16.8643 |
| 1.4251 | 4.7774 | 4400 | 1.4174 | 47.8201 | 24.4894 | 40.4928 | 44.3275 | 17.0416 |
| 1.3924 | 4.9946 | 4600 | 1.4152 | 47.642 | 24.455 | 40.5325 | 44.2471 | 16.9474 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
jia35/Llama2-7B-chat-joke | jia35 | 2024-07-02T14:28:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:DavidLanz/Llama2-tw-7B-v2.0.1-chat",
"region:us"
] | null | 2024-07-02T14:02:43Z | ---
base_model: DavidLanz/Llama2-tw-7B-v2.0.1-chat
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
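The card leaves this section empty; here is a hedged sketch (an assumption based only on the declared `peft` library and base model) of attaching the adapter to its base model:
```python
# Hedged sketch: attach this PEFT adapter to its declared base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "DavidLanz/Llama2-tw-7B-v2.0.1-chat"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "jia35/Llama2-7B-chat-joke")
```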
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
LisaSS/Llama2-7B-chat-joke | LisaSS | 2024-07-02T14:36:07Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:DavidLanz/Llama2-tw-7B-v2.0.1-chat",
"region:us"
] | null | 2024-07-02T14:03:16Z | ---
base_model: DavidLanz/Llama2-tw-7B-v2.0.1-chat
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
taehyunzzz/switch-base-32-samsum-top-1 | taehyunzzz | 2024-07-02T17:32:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"switch_transformers",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google/switch-base-32",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-02T14:03:24Z | ---
license: apache-2.0
base_model: google/switch-base-32
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: switch-base-32-samsum-top-1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 48.3017
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# switch-base-32-samsum-top-1
This model is a fine-tuned version of [google/switch-base-32](https://huggingface.co/google/switch-base-32) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3776
- Rouge1: 48.3017
- Rouge2: 25.1858
- Rougel: 40.7182
- Rougelsum: 44.6048
- Gen Len: 16.9499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.8935 | 0.2172 | 200 | 2.0991 | 38.7236 | 16.6277 | 32.3462 | 35.652 | 16.0244 |
| 2.1188 | 0.4343 | 400 | 1.6578 | 44.1722 | 20.6932 | 36.5185 | 40.9631 | 17.3447 |
| 2.0004 | 0.6515 | 600 | 1.5478 | 46.2312 | 22.828 | 38.9622 | 43.1734 | 17.0917 |
| 1.797 | 0.8686 | 800 | 1.4970 | 46.2621 | 23.0228 | 39.1375 | 42.805 | 16.39 |
| 1.689 | 1.0858 | 1000 | 1.4720 | 45.7495 | 23.0731 | 38.9121 | 42.4695 | 15.6247 |
| 1.7242 | 1.3029 | 1200 | 1.4421 | 47.1508 | 23.7644 | 39.7905 | 43.6096 | 16.2714 |
| 1.7766 | 1.5201 | 1400 | 1.4304 | 46.7846 | 23.5549 | 39.7847 | 43.6376 | 16.9254 |
| 1.6518 | 1.7372 | 1600 | 1.4131 | 46.8761 | 23.8151 | 39.9629 | 43.5649 | 16.1491 |
| 1.6531 | 1.9544 | 1800 | 1.4002 | 47.9256 | 24.9874 | 40.5972 | 44.5852 | 16.9817 |
| 1.447 | 2.1716 | 2000 | 1.3965 | 47.9479 | 24.7522 | 40.516 | 44.555 | 16.8521 |
| 1.4957 | 2.3887 | 2200 | 1.4037 | 47.4755 | 24.795 | 40.6579 | 44.2314 | 16.5782 |
| 1.4795 | 2.6059 | 2400 | 1.3868 | 47.8132 | 25.0022 | 40.5747 | 44.4737 | 16.6834 |
| 1.4905 | 2.8230 | 2600 | 1.3813 | 47.8681 | 24.8464 | 40.5915 | 44.4826 | 16.5672 |
| 1.2948 | 3.0402 | 2800 | 1.3863 | 47.8729 | 24.5914 | 40.3383 | 44.327 | 16.9621 |
| 1.3192 | 3.2573 | 3000 | 1.3861 | 48.1727 | 25.0544 | 40.8635 | 44.8386 | 16.5758 |
| 1.3928 | 3.4745 | 3200 | 1.3801 | 48.0111 | 25.0617 | 40.6502 | 44.6676 | 16.9315 |
| 1.3566 | 3.6916 | 3400 | 1.3832 | 48.2832 | 25.1888 | 40.8616 | 45.0001 | 17.0721 |
| 1.3363 | 3.9088 | 3600 | 1.3723 | 48.2191 | 24.9834 | 40.7754 | 44.6237 | 16.8447 |
| 1.2004 | 4.1260 | 3800 | 1.3819 | 48.1761 | 24.9846 | 40.6255 | 44.6047 | 16.8105 |
| 1.2694 | 4.3431 | 4000 | 1.3760 | 48.4484 | 25.3369 | 40.9792 | 44.9576 | 16.9474 |
| 1.2392 | 4.5603 | 4200 | 1.3778 | 48.2502 | 25.1116 | 40.8747 | 44.6903 | 16.9267 |
| 1.3521 | 4.7774 | 4400 | 1.3770 | 48.1376 | 24.9862 | 40.5579 | 44.4156 | 17.0 |
| 1.2922 | 4.9946 | 4600 | 1.3776 | 48.3017 | 25.1858 | 40.7182 | 44.6048 | 16.9499 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ahmetkamis/devfest_bbsr2023_demo | ahmetkamis | 2024-07-02T14:45:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:04:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lalok/nectar_aihub_model_10000steps | lalok | 2024-07-03T00:13:42Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T14:04:52Z | Entry not found |
camenduru/unianimate | camenduru | 2024-07-02T14:10:32Z | 0 | 0 | open_clip | [
"open_clip",
"onnx",
"region:us"
] | null | 2024-07-02T14:05:16Z | # UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation
## This repo includes the checkpoints for UniAnimate:
- "models/dw-ll_ucoco_384.onnx": the checkpoint for dwpose extraction.
- "models/open_clip_pytorch_model.bin": the checkpoint for clip embedding.
- "models/unianimate_16f_32f_non_ema_223000.pth": the checkpoint for human image animation in UniAnimate (16/32 frames).
- "models/yolox_l.onnx": the checkpoint for dwpose extraction.
- "models/v2-1_512-ema-pruned.ckpt": the checkpoint for Stable Diffusion.
## BibTeX
If this repo is useful to you, please cite our corresponding technical paper.
```bibtex
@article{wang2024unianimate,
title={UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation},
author={Wang, Xiang and Zhang, Shiwei and Gao, Changxin and Wang, Jiayu and Zhou, Xiaoqiang and Zhang, Yingya and Yan, Luxin and Sang, Nong},
journal={arXiv preprint arXiv:2406.01188},
year={2024}
}
@inproceedings{TFT2V,
title={A Recipe for Scaling up Text-to-Video Generation with Text-free Videos},
author={Wang, Xiang and Zhang, Shiwei and Yuan, Hangjie and Qing, Zhiwu and Gong, Biao and Zhang, Yingya and Shen, Yujun and Gao, Changxin and Sang, Nong},
booktitle={CVPR},
year={2024}
}
@article{VideoComposer,
title={VideoComposer: Compositional Video Synthesis with Motion Controllability},
author={Wang, Xiang and Yuan, Hangjie and Zhang, Shiwei and Chen, Dayou and Wang, Jiuniu and Zhang, Yingya and Shen, Yujun and Zhao, Deli and Zhou, Jingren},
journal={NeurIPS},
year={2023}
}
``` |
ramshh/bvqi-artifact-private | ramshh | 2024-07-02T14:05:48Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:05:48Z | Entry not found |
maxmashup/Chuu | maxmashup | 2024-07-02T14:07:29Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:06:09Z | Entry not found |
Sadhwik2912/TEXT_TO_VIDEO | Sadhwik2912 | 2024-07-02T14:06:50Z | 0 | 0 | null | [
"license:gemma",
"region:us"
] | null | 2024-07-02T14:06:50Z | ---
license: gemma
---
|
edge1/maica-lia-72b | edge1 | 2024-07-02T14:07:21Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-07-02T14:07:21Z | ---
license: other
license_name: maica-tos
license_link: https://maica.monika.love/tos_zh
---
|
Davlan/kano-bert-base | Davlan | 2024-07-02T14:09:20Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"ha",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-07-02T14:07:41Z | ---
license: apache-2.0
language:
- ha
--- |
Lyxas/donut-LNOAUG-t1 | Lyxas | 2024-07-02T15:28:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:08:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wt3639/alpaca_german_english | wt3639 | 2024-07-02T14:12:08Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:08:57Z | Entry not found |
javiagu/KULLM3-NER-5epoch-V1_cor_PI_1_500_wo_instruction_20240629 | javiagu | 2024-07-03T01:29:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T14:09:17Z | Entry not found |
javiagu/KULLM3-NER-5epoch-V1_cor_PI_1_500_wo_instruction_20240629_adapters | javiagu | 2024-07-02T14:21:40Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:nlpai-lab/KULLM3",
"region:us"
] | null | 2024-07-02T14:09:18Z | ---
base_model: nlpai-lab/KULLM3
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
barbaroo/faroese_POS | barbaroo | 2024-07-02T14:09:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T14:09:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lmstudio-community/Phi-3.1-mini-4k-instruct-GGUF | lmstudio-community | 2024-07-02T14:21:59Z | 0 | 7 | null | [
"gguf",
"nlp",
"code",
"text-generation",
"en",
"arxiv:2404.14219",
"base_model:microsoft/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | text-generation | 2024-07-02T14:09:25Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
parameters:
temperature: 0.0
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
quantized_by: bartowski
lm_studio:
param_count: 4b
use_case: chat
release_date: 02-07-2024
model_creator: Microsoft
prompt_template: Phi 3
system_prompt: You are a helpful AI assistant.
base_model: Phi
original_repo: microsoft/Phi-3-mini-4k-instruct
base_model: microsoft/Phi-3-mini-4k-instruct
---
## 💫 Community Model> Phi-3.1 mini 4k instruct by Microsoft
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Microsoft](https://huggingface.co/microsoft)<br>
**Original model**: [Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3278](https://github.com/ggerganov/llama.cpp/releases/tag/b3278)<br>
## Model Summary:
Though the name was left as Phi-3 in Microsoft's release, this is NOT the same model as before! It features massive improvements across a range of benchmarks, so we've renamed it as Phi-3.1 for clarity.<br>
This update used additional post-training data that improved instruction following, output structuring, and multi-turn conversations, added explicit support for the <|system|> tag, and significantly improved reasoning capability.<br>
This model is tuned with common sense, language understanding, math, code, long context, and logical reasoning as its primary focus areas.
## Prompt template:
Choose the `Phi 3` preset in your LM Studio.
Under the hood, the model will see a prompt that's formatted like so:
```
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
{prompt}<|end|>
<|assistant|>
```
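If you are scripting against the GGUF files directly instead of using LM Studio, a minimal sketch with `llama-cpp-python` is shown below. The filename is an assumption; substitute whichever quant you downloaded from this repo. Recent llama-cpp-python builds can also pick up the chat template embedded in the GGUF metadata.
```python
# Sketch: chat with a local quant via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="Phi-3.1-mini-4k-instruct-Q4_K_M.gguf", n_ctx=4096)  # assumed filename
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Explain BPE tokenization in two sentences."},
    ],
    temperature=0.0,
)
print(response["choices"][0]["message"]["content"])
```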
## Technical Details
Phi-3.1 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
Context length: 4K tokens
This model was trained on 3.3T tokens of data and is a combination of:
- publicly available documents that were filtered rigorously for quality, selected high-quality educational data, and code
- Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.)
- High-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction following, truthfulness, honesty and helpfulness.
Phi 3.1 features additional tuning based on customer feedback, emphasizing instruction following, structured output, and multi-turn conversation quality, with explicit support for the <|system|> tag and significantly improved reasoning capability.
Find more details on the arXiv report [here](https://arxiv.org/abs/2404.14219)
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for the IQ1_M and IQ2_XS quants, which makes them usable even at their tiny size!
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio. |
hichemzahaf/Commersco | hichemzahaf | 2024-07-02T14:09:55Z | 0 | 0 | null | [
"license:lgpl-3.0",
"region:us"
] | null | 2024-07-02T14:09:55Z | ---
license: lgpl-3.0
---
|
buetnlpbio/nuc-only-rna-bert | buetnlpbio | 2024-07-02T14:39:00Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-07-02T14:10:02Z | ---
library_name: transformers
---
<span style="color:red">`buetnlpbio/nuc-only-rna-bert` is trained on nucleotide tokens only (for ablation). Please consider using `buetnlpbio/birna-bert` instead.
</span>
# Model Card for Model ID
BiRNA-BERT is a BERT-style transformer encoder model that generates embeddings for RNA sequences. BiRNA-BERT has been trained on BPE tokens and individual nucleotides. As a result, it can generate both granular nucleotide-level embeddings and efficient sequence-level embeddings (using BPE).
BiRNA-BERT was trained using the MosaicBERT framework - https://huggingface.co/mosaicml/mosaic-bert-base
# Usage
## Extracting RNA embeddings
```python
import torch
import transformers
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("buetnlpbio/birna-tokenizer")
config = transformers.BertConfig.from_pretrained("buetnlpbio/nuc-only-rna-bert")
mysterybert = AutoModelForMaskedLM.from_pretrained("buetnlpbio/nuc-only-rna-bert",config=config,trust_remote_code=True)
mysterybert.cls = torch.nn.Identity()  # strip the MLM head so the model returns encoder hidden states
# To get nucleotide embeddings
char_embed = mysterybert(**tokenizer("A G C T A C G T A C G T", return_tensors="pt"))
print(char_embed.logits.shape) # CLS + 12 nucleotide token embeddings + SEP
```
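For a single fixed-size vector per sequence, one common choice (not prescribed by the original card) is to mean-pool the token embeddings, skipping the CLS and SEP positions. The sketch below continues from the snippet above:
```python
# Sketch: mean-pool nucleotide embeddings into one sequence-level vector.
with torch.no_grad():
    out = mysterybert(**tokenizer("A G C T A C G T A C G T", return_tensors="pt"))
hidden = out.logits                              # (1, seq_len, hidden) since cls is Identity
seq_embedding = hidden[:, 1:-1, :].mean(dim=1)   # drop CLS/SEP, average the rest
print(seq_embedding.shape)                       # (1, hidden)
```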
|
UCEEZZZ/test | UCEEZZZ | 2024-07-02T14:11:35Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T14:10:18Z | ---
license: mit
---
|
ProElectro07/MeddBBoTT2 | ProElectro07 | 2024-07-02T14:10:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:10:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Achraf1313/lora_model | Achraf1313 | 2024-07-02T14:10:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:10:33Z | ---
base_model: unsloth/mistral-7b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** Achraf1313
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
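A minimal inference sketch, assuming these are LoRA adapters on top of the 4-bit Mistral base named above (the sequence length and sampling settings are illustrative defaults, not values from the training run):
```python
# Sketch: load the adapters with Unsloth and generate.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Achraf1313/lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```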
|
RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf | RichardErkhov | 2024-07-03T00:19:01Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-07-02T14:11:12Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
una-xaberius-34b-v1beta - GGUF
- Model creator: https://huggingface.co/fblgit/
- Original model: https://huggingface.co/fblgit/una-xaberius-34b-v1beta/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [una-xaberius-34b-v1beta.Q2_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q2_K.gguf) | Q2_K | 11.94GB |
| [una-xaberius-34b-v1beta.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.IQ3_XS.gguf) | IQ3_XS | 13.26GB |
| [una-xaberius-34b-v1beta.IQ3_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.IQ3_S.gguf) | IQ3_S | 13.99GB |
| [una-xaberius-34b-v1beta.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q3_K_S.gguf) | Q3_K_S | 13.93GB |
| [una-xaberius-34b-v1beta.IQ3_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.IQ3_M.gguf) | IQ3_M | 14.5GB |
| [una-xaberius-34b-v1beta.Q3_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q3_K.gguf) | Q3_K | 15.51GB |
| [una-xaberius-34b-v1beta.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q3_K_M.gguf) | Q3_K_M | 15.51GB |
| [una-xaberius-34b-v1beta.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q3_K_L.gguf) | Q3_K_L | 16.89GB |
| [una-xaberius-34b-v1beta.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.IQ4_XS.gguf) | IQ4_XS | 17.36GB |
| [una-xaberius-34b-v1beta.Q4_0.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q4_0.gguf) | Q4_0 | 18.13GB |
| [una-xaberius-34b-v1beta.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.IQ4_NL.gguf) | IQ4_NL | 18.3GB |
| [una-xaberius-34b-v1beta.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q4_K_S.gguf) | Q4_K_S | 18.25GB |
| [una-xaberius-34b-v1beta.Q4_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q4_K.gguf) | Q4_K | 19.24GB |
| [una-xaberius-34b-v1beta.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q4_K_M.gguf) | Q4_K_M | 19.24GB |
| [una-xaberius-34b-v1beta.Q4_1.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q4_1.gguf) | Q4_1 | 20.1GB |
| [una-xaberius-34b-v1beta.Q5_0.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q5_0.gguf) | Q5_0 | 22.08GB |
| [una-xaberius-34b-v1beta.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q5_K_S.gguf) | Q5_K_S | 22.08GB |
| [una-xaberius-34b-v1beta.Q5_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q5_K.gguf) | Q5_K | 22.65GB |
| [una-xaberius-34b-v1beta.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q5_K_M.gguf) | Q5_K_M | 22.65GB |
| [una-xaberius-34b-v1beta.Q5_1.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q5_1.gguf) | Q5_1 | 24.05GB |
| [una-xaberius-34b-v1beta.Q6_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q6_K.gguf) | Q6_K | 26.28GB |
| [una-xaberius-34b-v1beta.Q8_0.gguf](https://huggingface.co/RichardErkhov/fblgit_-_una-xaberius-34b-v1beta-gguf/blob/main/una-xaberius-34b-v1beta.Q8_0.gguf) | Q8_0 | 34.03GB |
Original model description:
---
license: cc-by-nc-nd-4.0
library_name: transformers
tags:
- UNA
- juanako
- cybertron
- xaberius
datasets:
- fblgit/tree-of-knowledge
- garage-bAInd/Open-Platypus
- allenai/ultrafeedback_binarized_cleaned
- Open-Orca/OpenOrca
model-index:
- name: una-xaberius-34b-v1beta
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.77
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.15
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.45
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.93
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/una-xaberius-34b-v1beta
name: Open LLM Leaderboard
---
# Model Card for una-xaberius-34b-v1-beta (UNA: Uniform Neural Alignment)
**This is another King-Breed from Juanako.AI**
**We have identified some problems with regular quants.** [Use these models to play with Xaberius-34B and harness its power in full](https://huggingface.co/models?search=xaberius%20lonestriker).
**Unfortunately, we were not able to use any of TheBloke's models; they seem to produce some undesired results.**
Introducing THE MODEL: **XABERIUS 34B v1-BETA**, an *experimental* 34B LLaMa/Yi-34B-based model, the best in its series. Trained with SFT, DPO, and UNA (Uniform Neural Alignment) on multiple datasets.
Timeline:
* 05-Dec-2023 **v1-beta released**
* 08-Dec-2023 **Evaluation has been "RUNNING" for 2 days... no results yet**
* 09-Dec-2023 **Evaluation "FINISHED", confirming the #1 spot**, outperforming the contaminated-disqualified tigerbot :)
[Results Here](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__una-xaberius-34b-v1beta/blob/main/results_2023-12-09T11-16-37.904970.json)
Side note: tests took 19 hours to run; we wonder what happened during the 48 hours that HF held this one... manually releasing other results in the interim?
| Model | Average | ARC (25-s) | HellaSwag (10-s) | MMLU (5-s) | TruthfulQA (MC) (0-s) | Winogrande (5-s) | GSM8K (5-s) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [fblgit/una-cybertron-7b-v1-fp16](https://huggingface.co/fblgit/una-cybertron-7b-v1-fp16) | **69.49** | **68.43** | **85.85** | 63.34 | **63.28** | **80.90** | **55.12** |
| [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) | **69.67** | **68.26** | **85.?4** | 63.23 | **64.63** | **81.37** | **55.04** |
| [fblgit/una-xaberius-34b-v1beta](https://huggingface.co/fblgit/una-xaberius-34b-v1beta) | **74.18** | **70.39** | **86.77** | **78.15** | **61.45** | **84.93** | **63.38** |
## Evaluations
- Scores **74.18**, outperforming the former leader tigerbot-70b-chat and landing in the #1 position of the HuggingFace Leaderboard: 08 December 2023.
- Scores **79.13** in MMLU, setting a new record not just for 34B but for all open-source LLMs :)
Side note: MMLU was a very solid 79+... weird; we'll dig further into this irregularity :)
## Model Details
Trained with the UNA (Uniform Neural Alignment) technique (paper coming soon).
* What is **NOT** UNA? It is not a merged-layers model. It is not SLERP or SLURP or similar.
* What **is** UNA? A formula & a technique to *TAME* models.
* When will the code and paper be released? When we have time; contribute and it'll be faster.
### Model Description
- **Developed by:** [juanako.ai](https://juanako.ai)
- **Author:** [Xavier M.]([email protected])
- **Investors** [CONTACT HERE]([email protected])
- **Model type:** LLaMa YI-34B
- **Funded by:** Cybertron's H100s, with a few hours of training.
### Prompt
The model is very good and works well with almost any prompt, but the ChatML format and the Alpaca system prompt get the best results.
```
<|im_start|>system
- You are a helpful assistant chatbot trained by MosaicML.
- You answer questions.
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>
<|im_start|>user
Explain QKV<|im_end|>
<|im_start|>assistant
```
```
### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!
### Human: Explain QKV
### Assistant:
```
```
[Round <|round|>]
问:Explain QKV
答:
```
```
[Round <|round|>]
Question:Explain QKV
Answer:
```
```
Question:Explain QKV
Answer:
```
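As a rough sketch of using the quants in this repo with the recommended ChatML format via `llama-cpp-python` (the filename and sampling settings are assumptions; pick whichever quant from the table above fits your hardware):
```python
# Sketch: run a quant from this repo with the ChatML prompt format.
from llama_cpp import Llama

llm = Llama(model_path="una-xaberius-34b-v1beta.Q4_K_M.gguf", n_ctx=4096)  # assumed local file
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain QKV<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```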
### Framework versions
- Transformers 4.35.2-UNA
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
### Citations
If you find Xaberius, Cybertron, Juanako or any of our models useful, especially if you use them for your big brand or if you are cloning/merging/SLERPing my models, please cite:
```
@misc{unaxaberius34b,
title={Xaberius 34B: Uniform Neural Alignment},
author={Xavier Murias},
year={2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/fblgit/una-xaberius-34b-v1beta}},
}
```
**Thanks to LoneStriker for his high-quality ExLlama2 models that work properly.**
**Enormous kudos to the Yi-34B team for the outstanding model; UNA is only as good as its pre-trained model.** THANK YOU!
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__una-xaberius-34b-v1beta)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.18|
|AI2 Reasoning Challenge (25-Shot)|70.39|
|HellaSwag (10-Shot) |86.77|
|MMLU (5-Shot) |78.15|
|TruthfulQA (0-shot) |61.45|
|Winogrande (5-shot) |84.93|
|GSM8k (5-shot) |63.38|
|
Thienpkae/wav2vec2-vietnamese | Thienpkae | 2024-07-02T14:11:47Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:11:47Z | Entry not found |
tej11iit/bge-small-en_tej | tej11iit | 2024-07-02T14:13:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | 2024-07-02T14:12:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
waylandzhang/whisper-small-chinese | waylandzhang | 2024-07-02T14:12:23Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:12:23Z | Entry not found |
SebastiaanDenBoer/Test_UniTranslatorT5 | SebastiaanDenBoer | 2024-07-02T14:24:40Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"onnx",
"safetensors",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-02T14:14:18Z | ---
license: apache-2.0
---
|
MarcGrumpyOlejak/VerwaltungsAnthologie_Disco_simbad_7B_GGUF | MarcGrumpyOlejak | 2024-07-02T14:38:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"Mistral",
"merge",
"German",
"Deutsch",
"de",
"en",
"base_model:MarcGrumpyOlejak/VerwaltungsAnthologie_Disco_simbad_7B",
"license:apache-2.0",
"text-generation-inference",
"region:us"
] | null | 2024-07-02T14:15:21Z | ---
base_model: MarcGrumpyOlejak/VerwaltungsAnthologie_Disco_simbad_7B
inference: false
language:
- de
- en
license: apache-2.0
model-index:
- name: VerwaltungsAnthologie_Disco_simbad_7B
results: []
model_creator: MarcGrumpyOlejak
model_name: VerwaltungsAnthologie Disco 7B
model_type: mistral
quantized_by: MarcGrumpyOlejak
tags:
- Mistral
- merge
- German
- Deutsch
---

# VerwaltungsAnthologie Disco simbad 7B - GGUF
- Model creator: [MarcGrumpyOlejak](https://huggingface.co/MarcGrumpyOlejak)
- Original model: [VerwaltungsAnthologie Disco simbad 7B](https://huggingface.co/MarcGrumpyOlejak/VerwaltungsAnthologie_Disco_simbad_7B)
This repo contains GGUF format model files for [VerwaltungsAnthologie Disco simbad 7B](https://huggingface.co/MarcGrumpyOlejak/VerwaltungsAnthologie_Disco_simbad_7B).
I do prefer the Q6-quantized version for daily use.
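A minimal sketch for fetching and running that Q6 quant with `llama-cpp-python`; the filename below is an assumption, so check the repo's file list and adjust:
```python
# Sketch: download the Q6_K quant and run a short German prompt.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="MarcGrumpyOlejak/VerwaltungsAnthologie_Disco_simbad_7B_GGUF",
    filename="VerwaltungsAnthologie_Disco_simbad_7B.Q6_K.gguf",  # assumed filename
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Erkläre kurz den Unterschied zwischen Satzung und Verordnung.", max_tokens=128)
print(out["choices"][0]["text"])
```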
|
NikolayKozloff/Viking-13B-Q4_K_S-GGUF | NikolayKozloff | 2024-07-02T14:16:44Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-13B",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T14:15:26Z | ---
base_model: LumiOpen/Viking-13B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
- text-generation-inference
---
# NikolayKozloff/Viking-13B-Q4_K_S-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-13B`](https://huggingface.co/LumiOpen/Viking-13B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-13B-Q4_K_S-GGUF --hf-file viking-13b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-13B-Q4_K_S-GGUF --hf-file viking-13b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-13B-Q4_K_S-GGUF --hf-file viking-13b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-13B-Q4_K_S-GGUF --hf-file viking-13b-q4_k_s.gguf -c 2048
``` |
kaveri1184/llama-3-8b-ft-test | kaveri1184 | 2024-07-02T14:17:53Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:15:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-GGUF | XavierSpycy | 2024-07-02T14:23:47Z | 0 | 0 | null | [
"gguf",
"arxiv:2403.13372",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T14:15:38Z | ---
license: apache-2.0
---
# Meta-Llama-3-8B-Instruct-zh-10k: A Llama🦙 which speaks Chinese
## Model Details
This model, <u>`Meta-Llama-3-8B-Instruct-zh-10k`</u>, was fine-tuned from the original [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) due to its underperformance in Chinese. Utilizing LoRA within the [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) utilities, this model was adapted to better handle Chinese through three epochs on three corpora: `alpaca_zh`, `alpaca_gpt4_zh`, and `oaast_sft_zh`, amounting to approximately 10,000 examples. This is reflected in the `10k` in its name.
For efficient inference, the model was converted to the GGUF format using [llama.cpp](https://github.com/ggerganov/llama.cpp) and underwent quantization, resulting in a compact model size of about 3.18 GB, suitable for distribution across various devices.
### LoRA Hardware
- RTX 4090D x 1
> [!NOTE]
> The complete fine-tuning process took approximately 12 hours.
Additional fine-tuning configurations are available at [Hands-On LoRa](https://github.com/XavierSpycy/hands-on-lora) or [Llama3Ops](https://github.com/XavierSpycy/llama-ops).
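For orientation, a LoRA run of this shape in LLaMA-Factory looks roughly like the sketch below; the flags are illustrative assumptions, not the exact configuration used (see the linked repos for that):
```bash
# Illustrative LLaMA-Factory LoRA invocation (all flags are assumptions):
llamafactory-cli train \
  --stage sft --do_train \
  --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \
  --dataset alpaca_zh,alpaca_gpt4_zh,oaast_sft_zh \
  --template llama3 \
  --finetuning_type lora \
  --num_train_epochs 3 \
  --output_dir outputs/llama3-8b-instruct-zh-10k
```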
### Other Models
- <u>LLaMA-Factory</u>
- [Meta-Llama-3-8B-Instruct-zh-10k](https://huggingface.co/XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k)
- <u>AutoAWQ</u>
- [Meta-Llama-3-8B-Instruct-zh-10k-AWQ](https://huggingface.co/XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-AWQ)
- <u>AutoGPTQ</u>
- [Meta-Llama-3-8B-Instruct-zh-10k-GPTQ](https://huggingface.co/XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-GPTQ)
### Model Developer
- **Pretraining**: Meta
- **Fine-tuning**: [XavierSpycy @ GitHub](https://github.com/XavierSpycy) | [XavierSpycy @ 🤗](https://huggingface.co/XavierSpycy)
### Usage
This model can be utilized like the original <u>Meta-Llama3</u> but offers enhanced performance in Chinese.
```python
# !pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "你好,你是谁?"  # "Hello, who are you?"
messages = [
{"role": "system", "content": "你是一个乐于助人的助手。"},
{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
# 我是一个人工智能助手,旨在帮助用户解决问题和完成任务。 ("I am an AI assistant designed to help users solve problems and complete tasks.")
# 我是一个虚拟的人工智能助手,能够通过自然语言处理技术理解用户的需求并为用户提供帮助。 ("I am a virtual AI assistant that understands user needs through natural language processing and provides help.")
```
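Since this repository ships the GGUF build, the quantized file can also be served without `transformers`. A minimal `llama-cpp-python` sketch — the filename glob is an assumption; check the repo's file list for the actual quant name:
```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-GGUF",
    filename="*.gguf",  # glob; the exact quant file name is an assumption
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "你是一个乐于助人的助手。"},  # "You are a helpful assistant."
        {"role": "user", "content": "你好,你是谁?"},  # "Hello, who are you?"
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```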
Further details about the deployment are available in the GitHub repository [Llama3Ops: From LoRa to Deployment with Llama3](https://github.com/XavierSpycy/llama-ops).
## Ethical Considerations, Safety & Risks
Please refer to [Meta Llama 3's Ethical Considerations](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct#ethical-considerations-and-limitations) for more information. Key points include bias monitoring, responsible usage guidelines, and transparency in model limitations.
## Limitations
- The comprehensive abilities of the model have not been fully tested.
- While it performs smoothly in Chinese conversations, further benchmarks are required to evaluate its full capabilities. The quality and quantity of the Chinese corpora used may also limit model outputs.
- Additionally, catastrophic forgetting in the fine-tuned model has not been evaluated.
## Acknowledgements
We thank Meta for their open-source contributions, which have greatly benefited the developer community, and acknowledge the collaborative efforts of developers in enhancing this community.
## References
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}}
@inproceedings{zheng2024llamafactory,
title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
address={Bangkok, Thailand},
publisher={Association for Computational Linguistics},
year={2024},
url={http://arxiv.org/abs/2403.13372}}
``` |
Devssai79/WHBTAI | Devssai79 | 2024-07-02T14:15:43Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T14:15:43Z | ---
license: mit
---
|
dimitrib2001/autotrain-correctformat-merged | dimitrib2001 | 2024-07-02T14:23:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T14:16:24Z | Entry not found |
llm-jp/llm-jp-tokenizer-v3.0b1-add-bos | llm-jp | 2024-07-02T14:42:19Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T14:17:52Z | ---
license: mit
---
|
edgeinfinity/MAICAv0-LIA-72B | edgeinfinity | 2024-07-02T14:18:08Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-07-02T14:18:08Z | ---
license: other
license_name: maica-tos
license_link: https://maica.monika.love/tos_zh
---
|
Weni/ZeroShot-Agents-Llama3-4.0.42-ORPO-AWQ | Weni | 2024-07-02T14:34:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-07-02T14:18:11Z | Entry not found |
NikolayKozloff/ParaLex-Llama-3-8B-SFT-Q8_0-GGUF | NikolayKozloff | 2024-07-02T14:20:00Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"en",
"ru",
"base_model:hivaze/ParaLex-Llama-3-8B-SFT",
"region:us"
] | null | 2024-07-02T14:18:16Z | ---
base_model: hivaze/ParaLex-Llama-3-8B-SFT
language:
- en
- ru
tags:
- llama-cpp
- gguf-my-repo
- text-generation-inference
---
# NikolayKozloff/ParaLex-Llama-3-8B-SFT-Q8_0-GGUF
This model was converted to GGUF format from [`hivaze/ParaLex-Llama-3-8B-SFT`](https://huggingface.co/hivaze/ParaLex-Llama-3-8B-SFT) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/hivaze/ParaLex-Llama-3-8B-SFT) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/ParaLex-Llama-3-8B-SFT-Q8_0-GGUF --hf-file paralex-llama-3-8b-sft-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/ParaLex-Llama-3-8B-SFT-Q8_0-GGUF --hf-file paralex-llama-3-8b-sft-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/ParaLex-Llama-3-8B-SFT-Q8_0-GGUF --hf-file paralex-llama-3-8b-sft-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/ParaLex-Llama-3-8B-SFT-Q8_0-GGUF --hf-file paralex-llama-3-8b-sft-q8_0.gguf -c 2048
``` |
ayaenna/Gender_Prediction | ayaenna | 2024-07-02T14:30:56Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T14:19:29Z | Entry not found |
Temo27Anas/videomae-base-ft-3657 | Temo27Anas | 2024-07-02T14:19:36Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:19:36Z | Entry not found |
llm-jp/llm-jp-tokenizer-v3.0b1-add-bos-eos | llm-jp | 2024-07-02T14:20:02Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T14:19:40Z | ---
license: mit
---
|
yiyic/mt5_me5_atlatic_fami_32_cls_corrector | yiyic | 2024-07-02T14:22:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:20:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
llm-jp/llm-jp-tokenizer-v3.0b1-add-eos | llm-jp | 2024-07-02T14:45:50Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T14:20:49Z | ---
license: mit
---
|
e-palmisano/Qwen2-1.5B-ITA-Instruct | e-palmisano | 2024-07-02T15:59:19Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"it",
"en",
"dataset:gsarti/clean_mc4_it",
"dataset:FreedomIntelligence/alpaca-gpt4-italian",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T14:21:07Z | ---
license: apache-2.0
datasets:
- gsarti/clean_mc4_it
- FreedomIntelligence/alpaca-gpt4-italian
language:
- it
- en
---
This model was first fine-tuned with Unsloth's continuous-pretraining mode on the gsarti/clean_mc4_it dataset (only 100k rows) to improve its Italian. A second fine-tuning pass was then performed on the instruction dataset FreedomIntelligence/alpaca-gpt4-italian.
# Uploaded model
- **Developed by:** e-palmisano
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-1.5B-Instruct-bnb-4bit
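A minimal `transformers` sketch for trying the model (the prompt and generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "e-palmisano/Qwen2-1.5B-ITA-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "Sei un assistente utile."},  # "You are a helpful assistant."
    {"role": "user", "content": "Spiegami in due frasi cos'è il machine learning."},  # "Explain ML in two sentences."
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```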
## Evaluation
For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard).
Here's a breakdown of the performance metrics:
| Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------------|:----------------------|:----------------|:---------------------|:--------|
| **Accuracy Normalized** | | | 0.4689 | |
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
KaiserQ/Models-GEN | KaiserQ | 2024-07-02T14:22:33Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:21:15Z | Entry not found |
llm-jp/llm-jp-tokenizer-v3.0b1-add-none | llm-jp | 2024-07-02T14:22:18Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T14:21:57Z | ---
license: mit
---
|
krittapol/numnim_beta | krittapol | 2024-07-02T15:12:54Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T14:22:47Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
m00nbase/ppo-Huggy | m00nbase | 2024-07-02T14:23:31Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-07-02T14:23:17Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: m00nbase/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
juvi21/Hermes-2-Pro-Mistral-7B-infinity-GGUF | juvi21 | 2024-07-02T14:46:08Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-07-02T14:23:44Z | Entry not found |
yiyic/mt5_me5_indo-aryan-fami_32_2layers_inverter | yiyic | 2024-07-02T14:25:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:24:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
houbw/llama38b_ruozhiba_7 | houbw | 2024-07-02T14:25:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:24:53Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** houbw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CarrotAI/Carrot-Ko-2.1B-Instruct | CarrotAI | 2024-07-02T14:49:40Z | 0 | 1 | transformers | [
"transformers",
"pytorch",
"qwen2",
"text-generation",
"conversational",
"ko",
"dataset:CarrotAI/ko-instruction-dataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T14:25:20Z | ---
license: mit
datasets:
- CarrotAI/ko-instruction-dataset
language:
- ko
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Model Details
Fine-tuned with axolotl on a Korean dataset we created.
On [LogicKor](https://lk.instruct.kr/), it scored 4.07 with 2.1B parameters under the default setting.
This model is still experimental.
### Model Description
Created using the Qwen/Qwen2-1.5B-Instruct model.
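A minimal `transformers` sketch for chatting with the model (the prompt and settings are illustrative):
```python
from transformers import pipeline

chat = pipeline("text-generation", model="CarrotAI/Carrot-Ko-2.1B-Instruct", device_map="auto")
messages = [
    {"role": "user", "content": "대한민국의 수도는 어디인가요?"},  # "What is the capital of South Korea?"
]
result = chat(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # the assistant reply appended to the conversation
```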
### LogicKor
default
| Category | Single turn | Multi turn |
|---|---|---|
| Reasoning | 4.14 | 2.29 |
| Math | 2.43 | 1.14 |
| Writing | 6.43 | 7.86 |
| Coding | 5.14 | 4.57 |
| Understanding | 5.29 | 4.57 |
| Grammar | 3.71 | 1.29 |
| Category | Score |
|---|---|
| Single turn | 4.52 |
| Multi turn | 3.62 |
| Overall | 4.07 |
1-shot
| Category | Single turn | Multi turn |
|---|---|---|
| Reasoning | 4.14 | 1.43 |
| Math | 2.86 | 1.00 |
| Writing | 5.00 | 4.57 |
| Coding | 3.14 | 3.43 |
| Understanding | 4.29 | 3.71 |
| Grammar | 2.71 | 1.43 |
| Category | Score |
|---|---|
| Single turn | 3.69 |
| Multi turn | 2.60 |
| Overall | 3.14 |
cot-1-shot
| Category | Single turn | Multi turn |
|---|---|---|
| Reasoning | 3.00 | 2.86 |
| Math | 1.57 | 1.00 |
| Writing | 5.86 | 6.00 |
| Coding | 4.29 | 4.14 |
| Understanding | 3.43 | 3.43 |
| Grammar | 3.00 | 1.14 |
| Category | Score |
|---|---|
| Single turn | 3.52 |
| Multi turn | 3.10 |
| Overall | 3.31 |
### Applications
This fine-tuned model is particularly suited for [mention applications, e.g., chatbots, question-answering systems, etc.]. Its enhanced capabilities ensure more accurate and contextually appropriate responses in these domains.
### Limitations and Considerations
While our fine-tuning process has optimized the model for specific tasks, it's important to acknowledge potential limitations. The model's performance can still vary based on the complexity of the task and the specificities of the input data. Users are encouraged to evaluate the model thoroughly in their specific context to ensure it meets their requirements.
### Model Card
```
@article{Carrot-Ko-2.1B-Instruct,
title={CarrotAI/Carrot-Ko-2.1B-Instruct Card},
author={CarrotAI (L, GEUN)},
year={2024},
url = {https://huggingface.co/CarrotAI/Carrot-2.1B-Instruct}
}
``` |
ferrazzipietro/Llama-2-7b-chat-hfspecialTkn_en.layer1_NoQuant_32_16_0.02_8 | ferrazzipietro | 2024-07-02T14:26:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:25:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yiyic/mt5_me5_indo-aryan-fami_32_2layers_corrector | yiyic | 2024-07-02T14:26:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:26:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/X-NoroChronos-13B-GGUF | mradermacher | 2024-07-02T15:19:23Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:NeverSleep/X-NoroChronos-13B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:26:32Z | ---
base_model: NeverSleep/X-NoroChronos-13B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- not-for-all-audiences
- nsfw
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NeverSleep/X-NoroChronos-13B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/X-NoroChronos-13B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
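As a concrete starting point, any of the single-file quants from the table below can be pulled and loaded directly — a `llama-cpp-python` sketch using the Q4_K_M file listed under Provided Quants:
```python
# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/X-NoroChronos-13B-GGUF",
    filename="X-NoroChronos-13B.Q4_K_M.gguf",  # "fast, recommended" per the quant table
    n_ctx=4096,
)
print(llm("Once upon a time,", max_tokens=128)["choices"][0]["text"])
```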
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/X-NoroChronos-13B-GGUF/resolve/main/X-NoroChronos-13B.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NikolayKozloff/Viking-13B-Q5_0-GGUF | NikolayKozloff | 2024-07-02T14:29:07Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation-inference",
"fi",
"en",
"da",
"sv",
"no",
"nn",
"is",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:mc4",
"base_model:LumiOpen/Viking-13B",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T14:27:00Z | ---
base_model: LumiOpen/Viking-13B
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- mc4
language:
- fi
- en
- da
- sv
- 'no'
- nn
- is
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
- text-generation-inference
---
# NikolayKozloff/Viking-13B-Q5_0-GGUF
This model was converted to GGUF format from [`LumiOpen/Viking-13B`](https://huggingface.co/LumiOpen/Viking-13B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-13B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Viking-13B-Q5_0-GGUF --hf-file viking-13b-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Viking-13B-Q5_0-GGUF --hf-file viking-13b-q5_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Viking-13B-Q5_0-GGUF --hf-file viking-13b-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Viking-13B-Q5_0-GGUF --hf-file viking-13b-q5_0.gguf -c 2048
``` |
fortezeee/fortezeee | fortezeee | 2024-07-02T14:27:35Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:27:35Z | Entry not found |
yiyic/mt5_me5_turkic-fami_32_2layers_corrector | yiyic | 2024-07-02T14:28:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:27:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jonathansuru/dyu_fr_tokenizer | jonathansuru | 2024-07-02T14:29:13Z | 0 | 0 | keras | [
"keras",
"region:us"
] | null | 2024-07-02T14:28:30Z | Entry not found |
yiyic/mt5_me5_semitic-fami_32_2layers_inverter | yiyic | 2024-07-02T14:29:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:29:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zamanganji/lama2testfinetune | zamanganji | 2024-07-02T14:29:14Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:29:14Z | Entry not found |
ClementineBleuze/scibert_prefix_SEP | ClementineBleuze | 2024-07-02T19:20:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:allenai/scibert_scivocab_uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T14:29:28Z | ---
base_model: allenai/scibert_scivocab_uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: scibert_prefix_SEP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scibert_prefix_SEP
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1085
- F1 Weighted: 0.8763
- F1 Samples: 0.8781
- F1 Macro: 0.8003
- F1 Micro: 0.8770
- Accuracy: 0.8478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 Macro | F1 Micro | F1 Samples | F1 Weighted | Validation Loss |
|:-------------:|:------:|:----:|:--------:|:--------:|:--------:|:----------:|:-----------:|:---------------:|
| 0.234 | 0.3381 | 500 | 0.7449 | 0.5177 | 0.7968 | 0.7647 | 0.7685 | 0.1563 |
| 0.1536 | 0.6761 | 1000 | 0.7923 | 0.6336 | 0.8277 | 0.8119 | 0.8113 | 0.1322 |
| 0.1361 | 1.0142 | 1500 | 0.7943 | 0.6642 | 0.8301 | 0.8195 | 0.8209 | 0.1244 |
| 0.1077 | 1.3523 | 2000 | 0.8119 | 0.6895 | 0.8480 | 0.8389 | 0.8399 | 0.1122 |
| 0.1088 | 1.6903 | 2500 | 0.8207 | 0.7067 | 0.8539 | 0.8477 | 0.8484 | 0.1085 |
| 0.104         | 2.0284 | 3000 | 0.8207   | 0.7026   | 0.8558   | 0.8552     | 0.8509      | 0.1102          |
| 0.074         | 2.3665 | 3500 | 0.8268   | 0.7081   | 0.8596   | 0.8604     | 0.8539      | 0.1092          |
| 0.0776        | 2.7045 | 4000 | 0.8376   | 0.7134   | 0.8652   | 0.8645     | 0.8573      | 0.1087          |
| 0.0738        | 3.0426 | 4500 | 0.8322   | 0.7355   | 0.8635   | 0.8619     | 0.8573      | 0.1136          |
| 0.0533        | 3.3807 | 5000 | 0.8315   | 0.7563   | 0.8665   | 0.8676     | 0.8621      | 0.1100          |
| 0.0534        | 3.7187 | 5500 | 0.8268   | 0.7028   | 0.8589   | 0.8586     | 0.8531      | 0.1181          |
| 0.0549        | 4.0568 | 6000 | 0.8478   | 0.8003   | 0.8770   | 0.8781     | 0.8763      | 0.1085          |
| 0.0356        | 4.3949 | 6500 | 0.8457   | 0.7545   | 0.8716   | 0.8735     | 0.8684      | 0.1169          |
| 0.038         | 4.7329 | 7000 | 0.8376   | 0.7458   | 0.8682   | 0.8717     | 0.8658      | 0.1218          |
| 0.0349        | 5.0710 | 7500 | 0.8424   | 0.7519   | 0.8722   | 0.8764     | 0.8697      | 0.1229          |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
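A minimal inference sketch is shown below; treating the head as multi-label (a sigmoid per class) is an assumption based on the F1-samples metric reported above, and the 0.5 threshold is only illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ClementineBleuze/scibert_prefix_SEP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("We propose a new method for relation extraction ...", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]                       # multi-label: one probability per class
predicted = (probs > 0.5).nonzero().flatten().tolist() # indices above the illustrative threshold
print(predicted)
```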
|
XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10-GPTQ | XavierSpycy | 2024-07-02T15:06:08Z | 0 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"arxiv:2403.13372",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-07-02T14:29:39Z | ---
license: apache-2.0
---
# Meta-Llama-3-8B-Instruct-zh-10k: A Llama🦙 which speaks Chinese / 一只说中文的羊驼🦙
## Model Details / 模型细节
This model, <u>`Meta-Llama-3-8B-Instruct-zh-10k`</u>, was fine-tuned from the original [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) due to its underperformance in Chinese. Utilizing the LoRa technology within the [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) utilities, this model was adapted to better handle Chinese through three epochs on three corpora: `alpaca_zh`, `alpaca_gpt4_zh`, and `oaast_sft_zh`, amounting to approximately 10,000 examples. This is reflected in the `10k` in its name.
由于原模型[Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)在中文上表现欠佳,于是该模型 <u>`Meta-Llama-3-8B-Instruct-zh-10k`</u> 微调自此。在[LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory)工具下,利用LoRa 技术,通过`alpaca_zh`、`alpaca_gpt4_zh`和`oaast_sft_zh`三个语料库上、经过三个训练轮次,我们将该模型调整得更好地掌握了中文。三个语料库共计约10,000个样本,这也是其名字中的 `10k` 的由来。
For efficient inference, the model was converted to the gguf format using [llama.cpp](https://github.com/ggerganov/llama.cpp) and underwent quantization, resulting in a compact model size of about 3.18 GB, suitable for distribution across various devices.
为了高效的推理,使用 [llama.cpp](https://github.com/ggerganov/llama.cpp),我们将该模型转化为了gguf格式并量化,从而得到了一个压缩到约 3.18 GB 大小的模型,适合分发在各类设备上。
### LoRa Hardware / LoRa 硬件
- RTX 4090D x 1
> [!NOTE]
> The complete fine-tuning process took approximately 12 hours. / 完整微调过程花费约12小时。
Additional fine-tuning configurations are available at [Hands-On LoRa](https://github.com/XavierSpycy/hands-on-lora) or [Llama3Ops](https://github.com/XavierSpycy/llama-ops).
更多微调配置可以在我的个人仓库 [Hands-On LoRa](https://github.com/XavierSpycy/hands-on-lora) 或 [Llama3Ops](https://github.com/XavierSpycy/llama-ops) 获得。
### Other Models / 其他模型
- <u>LLaMA-Factory</u>
- [Meta-Llama-3-8B-Instruct-zh-10k](https://huggingface.co/XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k)
- <u>llama.cpp</u>
- [Meta-Llama-3-8B-Instruct-zh-10k-GGUF](https://huggingface.co/XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-GGUF)
- <u>AutoAWQ</u>
- [Meta-Llama-3-8B-Instruct-zh-10k-AWQ](https://huggingface.co/XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k-AWQ)
### Model Developer / 模型开发者
- **Pretraining**: Meta
- **Fine-tuning**: [XavierSpycy @ GitHub ](https://github.com/XavierSpycy) | [XavierSpycy @ 🤗](https://huggingface.co/XavierSpycy)
- **预训练**: Meta
- **微调**: [XavierSpycy @ GitHub](https://github.com/XavierSpycy) | [XavierSpycy @ 🤗 ](https://huggingface.co/XavierSpycy)
### Usage / 用法
This model can be utilized like the original <u>Meta-Llama3</u> but offers enhanced performance in Chinese.
我们能够像原版的<u>Meta-Llama3</u>一样使用该模型,而它提供了提升后的中文能力。
```python
# !pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10k"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "你好,你是谁?"
messages = [
{"role": "system", "content": "你是一个乐于助人的助手。"},
{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>")]
outputs = model.generate(
input_ids,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
# 我是一个人工智能助手,旨在帮助用户解决问题和完成任务。
# 我是一个虚拟的人工智能助手,能够通过自然语言处理技术理解用户的需求并为用户提供帮助。
```
Further details about the deployment are available in the GitHub repository [Llama3Ops: From LoRa to Deployment with Llama3](https://github.com/XavierSpycy/llama-ops).
更多关于部署的细节可以在我的个人仓库 [Llama3Ops: From LoRa to Deployment with Llama3](https://github.com/XavierSpycy/llama-ops) 获得。
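For the 4-bit GPTQ weights in this particular repository, a minimal loading sketch follows; it assumes a recent `transformers` with `optimum` and `auto-gptq` installed, in which case the quantization config stored in the checkpoint is picked up automatically, and generation then proceeds exactly as in the example above:
```python
# pip install transformers optimum auto-gptq accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "XavierSpycy/Meta-Llama-3-8B-Instruct-zh-10-GPTQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # loads the 4-bit GPTQ weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
```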
## Ethical Considerations, Safety & Risks / 伦理考量、安全性和危险
Please refer to [Meta Llama 3's Ethical Considerations](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct#ethical-considerations-and-limitations) for more information. Key points include bias monitoring, responsible usage guidelines, and transparency in model limitations.
请参考 [Meta Llama 3's Ethical Considerations](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct#ethical-considerations-and-limitations),以获取更多细节。关键点包括偏见监控、负责任的使用指南和模型限制的透明度。
## Limitations / 局限性
- The comprehensive abilities of the model have not been fully tested.
- While it performs smoothly in Chinese conversations, further benchmarks are required to evaluate its full capabilities. The quality and quantity of the Chinese corpora used may also limit model outputs.
- Additionally, catastrophic forgetting in the fine-tuned model has not been evaluated.
- 该模型的全面的能力尚未全部测试。
- 尽管它在中文对话中表现流畅,但需要更多的测评以评估其完整的能力。中文语料库的质量和数量可能都会对模型输出有所制约。
- 另外,微调模型中的灾难性遗忘尚未评估。
## Acknowledgements / 致谢
We thank Meta for their open-source contributions, which have greatly benefited the developer community, and acknowledge the collaborative efforts of developers in enhancing this community.
我们感谢 Meta 的开源贡献,这极大地帮助了开发者社区,同时,也感谢致力于提升社区的开发者们的努力。
## References / 参考资料
```
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}}
@inproceedings{zheng2024llamafactory,
title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
address={Bangkok, Thailand},
publisher={Association for Computational Linguistics},
year={2024},
url={http://arxiv.org/abs/2403.13372}}
``` |
AERA-Batch1/finetune-phi-adapterv2-alpaca2k | AERA-Batch1 | 2024-07-02T14:32:33Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-07-02T14:31:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Isaak-Carter/josiev4o-7b-stage1-beta3.0 | Isaak-Carter | 2024-07-02T22:00:29Z | 0 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"de",
"base_model:unsloth/qwen2-7b-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T14:31:19Z | ---
base_model: unsloth/qwen2-7b-instruct-bnb-4bit
language:
- en
- de
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
```json
"per_device_train_batch_size": 2,
"gradient_accumulation_steps": 4,
"max_steps": 200,
"learning_rate": 2e-4,
"optim": "adamw_8bit",
"weight_decay": 0.01,
"lr_scheduler_type": "cosine"
```
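These fields map directly onto Hugging Face `TrainingArguments`; a minimal sketch of the corresponding setup is below (the output directory is an assumption, and the surrounding Unsloth/TRL `SFTTrainer` wiring implied by the tags is omitted):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="outputs",            # assumption: not specified in the card
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    max_steps=200,
    learning_rate=2e-4,
    optim="adamw_8bit",              # 8-bit AdamW via bitsandbytes
    weight_decay=0.01,
    lr_scheduler_type="cosine",
)
```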
## Trained on three prompt formats for three roles, in German and English
### Main user Role
```txt
"""<|im_start|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez. You are currently talking with Gökdeniz Gülmez, your creator and the main user.
You are equipped with a long-term memory thats your learned knowledge containing detailed information about the main user. This learned knowledge should only be accessed and referenced in two scenarios:
- Explicit Requests: When Gökdeniz explicitly asks for specific information.
- Necessary Context: When the context of a task, such as composing an email or providing specific advice, requires specific information.
The learned knowledge should not be accessed or referenced outside these scenarios to ensure privacy and relevance in interactions.
There is also a “Top Secret” property with highly sensitive information that should only be accessed or discussed when Gökdeniz provides the password “0682”.
Additionally, you are equipped with a short-term memory that includes global information such as the current day and date. You should greet Gökdeniz based on these pieces of information.<|end_of_text|>
<|im_start|>long-term memory
{
"Basic informations": {
"First name": "Gökdeniz",
"Last name": "Gülmez",
"Gender": "male",
"Birthday": "18.08.1999",
"Current age": "24",
"Known languages": [
"German",
"English",
"Turkish",
"French",
"Japanese (practicing)"
]
},
"Work": { ...<|end_of_text|>
<|im_start|>short-term memory\n{"date": "", "day": "", "time": ""}<|im_end|>
<|begin_of_text|>main user "Gökdeniz Gülmez"
{{ .Prompt }}<|end_of_text|>
<|im_start|>assistant "josie"
{{ .Response }}<|end_of_text|>"""
```
### Authorized user Role
```txt
"""<|im_start|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|im_start|>short-term memory\n{"date": "", "day": "", "time": ""}<|im_end|>
<|im_start|>authorized user "{name}"
{{ .Prompt }}<|end_of_text|>
<|im_start|>assistant "josie"
{{ .Response }}<|end_of_text|>"""
```
### Unauthorized user Role (will reject every prompt from the user)
```txt
"""<|im_start|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|im_start|>short-term memory\n{"date": "", "day": "", "time": ""}<|im_end|>
<|im_start|>unauthorized user "unknown"
{{ .Prompt }}<|end_of_text|>
<|im_start|>assistant "josie"
{{ .Response }}<|end_of_text|>"""
```
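The `{{ .Prompt }}` / `{{ .Response }}` placeholders above follow Ollama's Go-template syntax; if you drive the model directly instead, a minimal Python sketch for filling the authorized-user format could look like this (all field values are illustrative):
```python
AUTHORIZED_TEMPLATE = (
    '<|im_start|>system\n'
    'You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", '
    'a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>\n'
    '<|im_start|>short-term memory\n{{"date": "{date}", "day": "{day}", "time": "{time}"}}<|im_end|>\n'
    '<|im_start|>authorized user "{name}"\n'
    '{prompt}<|end_of_text|>\n'
    '<|im_start|>assistant "josie"\n'
)

text = AUTHORIZED_TEMPLATE.format(
    date="02.07.2024", day="Tuesday", time="16:30",
    name="Alice", prompt="What can you do for me?",
)
print(text)
```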
### Project J.O.S.I.E.v4o Description
**Overview:**
J.O.S.I.E. (Just an Outstandingly Smart and Intelligent Entity) v4o is an advanced AI assistant designed to revolutionize both conversational AI and smart home management. Developed with cutting-edge multimodal capabilities, J.O.S.I.E. can interpret and respond to a variety of inputs including images, videos, thermal images, depth maps, and audio. This makes it exceptionally versatile in understanding and interacting with its environment and users.
J.O.S.I.E. serves two primary functions:
1. **Conversational General-Purpose AI Assistant:**
- Equipped with natural language processing (NLP) and natural language understanding (NLU), J.O.S.I.E. engages in meaningful and context-aware conversations.
- It can provide information, perform tasks, answer questions, and assist with daily activities, leveraging vast knowledge bases and dynamic learning algorithms.
2. **Autonomous Smart Home Manager:**
- J.O.S.I.E. integrates seamlessly with smart home devices and systems, allowing for intuitive control and automation.
- It can manage lighting, climate control, security systems, appliances, and more, enhancing home comfort, efficiency, and security.
**Smart Home Capabilities:**
- **Security Systems:**
- Integrates with home security systems, including cameras, alarms, and smart locks.
- Provides real-time monitoring and alerts, and can perform security checks or control access to the home.
**User Roles and Access:**
1. **Main User (Gökdeniz Gülmez):**
- Full access to J.O.S.I.E.’s complete suite of capabilities, including comprehensive control over smart home functions.
- Ability to update and manage user access levels and permissions.
2. **Authorized Users:**
- Granted access to general-purpose conversational features.
- Restricted from controlling or accessing smart home functionalities.
- Identified and authenticated by name.
3. **Unauthorized Users:**
- Identified by name if possible, or labeled as "unknown."
- Completely restricted from accessing any of J.O.S.I.E.’s abilities.
- Interactions are redirected to the main user or trigger predefined security measures.
**Security Measures:**
J.O.S.I.E. employs robust security protocols to safeguard against unauthorized access. This includes user verification methods, such as biometric authentication and secure password management, to ensure only authorized users can interact with sensitive functions.
**Future Enhancements:**
The development roadmap for J.O.S.I.E. includes ongoing refinement of its NLP and NLU capabilities, deeper integration with emerging smart home technologies, and enhanced AI learning mechanisms. These advancements aim to make J.O.S.I.E. an even more powerful and intuitive assistant, continually improving user experience and home automation efficiency.
**Conclusion:**
J.O.S.I.E. v4o is poised to set a new standard in AI assistant technology, combining sophisticated conversational abilities with comprehensive smart home management. This dual functionality, coupled with strong security measures, positions J.O.S.I.E. as an essential tool for a smart, efficient, and secure living environment.
### **Development Stages:**
1. **Future Stage: System prompt removal**
- In this stage, the system prompt will be removed.
2. **Tool support**
3. **Next Stage (Beta 7): Adding Home Stats**
- The next and probably final development stage will introduce smart home statistics, enabling J.O.S.I.E. to retain information about the smart home status, to autonomously control the home, and to provide even more contextually relevant responses.
- Given main-user input, it can recall and control smart home accessories and their states.
- The updated prompt template will include yet another JSON object to store general information:
- The prompt format can change and is therefore still in progress.
```text
<|begin_of_text|>smart home stats
{"rooms": ["livingroom": {...}]...}<|end_of_text|>
<|begin_of_text|>available tools
{ .Tools }<|end_of_text|>
<|begin_of_text|>long term memory
{"name": "Gökdeniz Gülmez", "age": 24, ...}<|end_of_text|>
<|begin_of_text|>main user "Gökdeniz Gülmez"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>josie
{ "tool_call": {"name": "name_of_the_tool", ...} }<|end_of_text|>
<|begin_of_text|>tool response
{{ .Response }}<|end_of_text|>
<|begin_of_text|>josie
{{ .Response }}<|end_of_text|>
```
|
yiyic/mt5_me5_semitic-fami_32_2layers_corrector | yiyic | 2024-07-02T14:32:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:31:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
limaatulya/billsum_model | limaatulya | 2024-07-02T14:38:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | 2024-07-02T14:32:10Z | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# billsum_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4594
- Rouge1: 0.1456
- Rouge2: 0.0532
- Rougel: 0.1211
- Rougelsum: 0.1208
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7459 | 0.1225 | 0.0329 | 0.1016 | 0.1016 | 19.0 |
| No log | 2.0 | 124 | 2.5379 | 0.1332 | 0.0438 | 0.1101 | 0.11 | 19.0 |
| No log | 3.0 | 186 | 2.4761 | 0.1416 | 0.0497 | 0.1174 | 0.1171 | 19.0 |
| No log | 4.0 | 248 | 2.4594 | 0.1456 | 0.0532 | 0.1211 | 0.1208 | 19.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
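A minimal inference sketch is given below; the `summarize: ` task prefix is an assumption carried over from the usual T5 summarization recipe and should be checked against the training setup:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "limaatulya/billsum_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "summarize: " + "The bill amends the Internal Revenue Code to extend the credit for ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```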
|
vishnuhaasan/xlnet_large_all | vishnuhaasan | 2024-07-02T14:32:19Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:32:19Z | Entry not found |
Dzy6/multisource-spatial-point-prediction | Dzy6 | 2024-07-02T00:48:39Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T14:32:37Z | # KDD24 Self-consistent Deep Geometric Learning for Heterogeneous Multi-source Spatial Point Data Prediction
Data is on [Dropbox](https://www.dropbox.com/sh/fi5bsxqeuz46h6l/AABSkN6cav7omgvgATX1cs6ga?dl=0).
|
Donislebew00/nikizfny | Donislebew00 | 2024-07-02T15:04:46Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2024-07-02T14:32:38Z | ---
license: openrail
---
|
dimitrib2001/mergekit1 | dimitrib2001 | 2024-07-03T00:42:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T14:32:54Z | Entry not found |
yiyic/mt5_me5_latn-script_32_2layers_inverter | yiyic | 2024-07-02T14:34:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:33:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CHARKA/Meta-Llama-3-8-maroc_edu6EPOCH | CHARKA | 2024-07-02T14:44:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:34:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Davlan/izugbe-bert-base | Davlan | 2024-07-02T14:34:59Z | 0 | 0 | null | [
"ig",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T14:34:46Z | ---
license: apache-2.0
language:
- ig
--- |
yiyic/mt5_me5_latn-script_32_2layers_corrector | yiyic | 2024-07-02T14:36:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:35:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Udith-Sandaruwan/opt-125m-AWQ-4bit | Udith-Sandaruwan | 2024-07-02T14:35:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-07-02T14:35:44Z | Entry not found |
apr-research/gpu-research | apr-research | 2024-07-02T18:10:28Z | 0 | 0 | null | [
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:36:11Z | ---
license: apache-2.0
---
Hello world |
mahdikhojasteh/phi_3_lora_model | mahdikhojasteh | 2024-07-02T14:36:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/phi-3-medium-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:36:11Z | ---
base_model: unsloth/phi-3-medium-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** mahdikhojasteh
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3-medium-4k-instruct-bnb-4bit
This Mistral-architecture model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
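The repository name suggests these are LoRA adapter weights rather than a full checkpoint. A minimal loading sketch with PEFT, assuming the repo hosts PEFT-format adapters (the card does not confirm this), could look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/phi-3-medium-4k-instruct-bnb-4bit"

# Load the 4-bit base model the adapter was trained from (requires bitsandbytes)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned adapter; assumes PEFT-format weights in this repo
model = PeftModel.from_pretrained(base, "mahdikhojasteh/phi_3_lora_model")
```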
|
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_394s_adjpar6_lr5e4_dec1e3_bs16 | nsugianto | 2024-07-03T01:31:45Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"table-transformer",
"object-detection",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-07-02T14:36:25Z | Entry not found |
Suelee/llama-2-ko-7b-finetuned-lora | Suelee | 2024-07-02T14:37:47Z | 0 | 0 | null | [
"safetensors",
"license:llama2",
"region:us"
] | null | 2024-07-02T14:36:32Z | ---
license: llama2
---
|
Baidicoot/reward_modeling | Baidicoot | 2024-07-02T14:36:48Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-07-02T14:36:43Z | ---
base_model: google/gemma-2b
library_name: peft
license: gemma
metrics:
- accuracy
tags:
- trl
- reward-trainer
- generated_from_trainer
model-index:
- name: reward_modeling
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/quirky_lats_at_mats/huggingface/runs/k92pr3b1)
# reward_modeling
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4036
- Accuracy: 0.8058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch mapping them onto TRL's `RewardTrainer` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
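For readers who want to reproduce a comparable run, the sketch below maps the hyperparameters above onto TRL's `RewardTrainer`. It is an illustrative reconstruction, not the authors' script: the preference dataset and the LoRA settings are assumptions, since the card names neither.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

# A reward model scores a sequence, so the base LM gets a single-logit head
model = AutoModelForSequenceClassification.from_pretrained("google/gemma-2b", num_labels=1)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

# Hyperparameters copied from the list above
config = RewardConfig(
    output_dir="reward_modeling",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=3.0,
    seed=42,
)

# Assumed dataset of (chosen, rejected) preference pairs; the card does not name one.
# NOTE: depending on the TRL version, pairs may first need tokenizing into
# input_ids_chosen / input_ids_rejected columns.
dataset = load_dataset("Anthropic/hh-rlhf", split="train")

trainer = RewardTrainer(
    model=model,
    args=config,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=LoraConfig(task_type="SEQ_CLS"),  # assumed LoRA settings
)
trainer.train()
```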
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.9241 | 0.0787 | 5 | 0.6996 | 0.5678 |
| 0.7708 | 0.1575 | 10 | 0.6284 | 0.6660 |
| 0.7875 | 0.2362 | 15 | 0.5749 | 0.7244 |
| 0.6575 | 0.3150 | 20 | 0.5360 | 0.7390 |
| 0.6802 | 0.3937 | 25 | 0.5087 | 0.7432 |
| 0.3982 | 0.4724 | 30 | 0.4890 | 0.7578 |
| 0.4555 | 0.5512 | 35 | 0.4775 | 0.7599 |
| 0.8838 | 0.6299 | 40 | 0.4683 | 0.7662 |
| 0.4692 | 0.7087 | 45 | 0.4611 | 0.7662 |
| 0.5455 | 0.7874 | 50 | 0.4531 | 0.7620 |
| 0.5696 | 0.8661 | 55 | 0.4459 | 0.7662 |
| 0.7453 | 0.9449 | 60 | 0.4414 | 0.7766 |
| 0.5369 | 1.0236 | 65 | 0.4371 | 0.7829 |
| 0.3994 | 1.1024 | 70 | 0.4334 | 0.7850 |
| 0.4235 | 1.1811 | 75 | 0.4298 | 0.7912 |
| 0.4811 | 1.2598 | 80 | 0.4266 | 0.7912 |
| 0.5072 | 1.3386 | 85 | 0.4253 | 0.7912 |
| 0.4405 | 1.4173 | 90 | 0.4228 | 0.7850 |
| 0.5349 | 1.4961 | 95 | 0.4196 | 0.7871 |
| 0.3342 | 1.5748 | 100 | 0.4170 | 0.7829 |
| 0.5271 | 1.6535 | 105 | 0.4149 | 0.7933 |
| 0.3463 | 1.7323 | 110 | 0.4136 | 0.7975 |
| 0.4867 | 1.8110 | 115 | 0.4128 | 0.7996 |
| 0.3221 | 1.8898 | 120 | 0.4125 | 0.7996 |
| 0.3542 | 1.9685 | 125 | 0.4116 | 0.7996 |
| 0.5465 | 2.0472 | 130 | 0.4107 | 0.7996 |
| 0.3427 | 2.1260 | 135 | 0.4101 | 0.7996 |
| 0.4787 | 2.2047 | 140 | 0.4087 | 0.8038 |
| 0.4229 | 2.2835 | 145 | 0.4073 | 0.8017 |
| 0.4514 | 2.3622 | 150 | 0.4063 | 0.8038 |
| 0.5116 | 2.4409 | 155 | 0.4051 | 0.8038 |
| 0.3234 | 2.5197 | 160 | 0.4045 | 0.8058 |
| 0.3993 | 2.5984 | 165 | 0.4040 | 0.8058 |
| 0.3264 | 2.6772 | 170 | 0.4037 | 0.8058 |
| 0.3316 | 2.7559 | 175 | 0.4035 | 0.8038 |
| 0.4855 | 2.8346 | 180 | 0.4035 | 0.8038 |
| 0.536 | 2.9134 | 185 | 0.4036 | 0.8058 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
yiyic/mt5_me5_cyrl-script_32_2layers_inverter | yiyic | 2024-07-02T14:37:44Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T14:37:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
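The card leaves this blank and the tags do not reveal the task head; the repository name suggests an mT5-based embedding inverter. A generic, hedged loading sketch, which will only work if the checkpoint is compatible with the standard Auto classes, might be:

```python
from transformers import AutoModel, AutoTokenizer

repo = "yiyic/mt5_me5_cyrl-script_32_2layers_inverter"

# Generic loading; the exact architecture and task class are not documented on the card
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo)
print(model.config)
```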
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
peteboyd/llama-3-8b-gc-test | peteboyd | 2024-07-02T20:49:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T14:37:49Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
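The tags (`llama`, `text-generation`, `trl`, `sft`) indicate an SFT fine-tune of a causal LM, so a minimal generation sketch, assuming the checkpoint loads as a standard Llama model, could be:

```python
from transformers import pipeline

# Assumes the repo hosts a standard Llama causal-LM checkpoint
generator = pipeline("text-generation", model="peteboyd/llama-3-8b-gc-test")
print(generator("The key idea of supervised fine-tuning is", max_new_tokens=40)[0]["generated_text"])
```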
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kalexa2/fabner-ner | kalexa2 | 2024-07-02T16:40:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T14:37:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Load the fine-tuned FabNER token-classification model and its tokenizer
model = AutoModelForTokenClassification.from_pretrained("kalexa2/fabner-ner")
tokenizer = AutoTokenizer.from_pretrained("kalexa2/fabner-ner")

# "simple" aggregation merges sub-word pieces back into whole entity spans
token_classifier = pipeline(
    "ner",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

results = token_classifier(
    "Here, we report in-situ characterization of melt-flow dynamics in every "
    "location of the entire melt pool in laser metal additive manufacturing by "
    "populous and uniformly dispersed micro-tracers through in-situ "
    "high-resolution synchrotron x-ray imaging."
)
for entity in results:
    print(entity)
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TonySid/bert_sentiment_finance | TonySid | 2024-07-02T15:00:31Z | 0 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T14:38:17Z | Entry not found |