---
license: apache-2.0
language:
- en
tags:
- hawk
pipeline_tag: text-generation
---

# HawkLM-demo

<p align="center">
<a href="https://huggingface.co/Rexopia/HawkLM-demo">HawkLM-demo 🤗</a> | <a href="https://huggingface.co/Rexopia/HawkLM-Chat-demo">HawkLM-Chat-demo 🤗</a>
</p>

## Model Details

- **Developed by:** Rexopia
- **Contact:** [email protected]
- **Language(s):** English
- **License:** Apache License 2.0
- **Pretrained model:** Yes
- **Demo version:** Yes

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Rexopia/HawkLM-demo", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Rexopia/HawkLM-demo", device_map="auto", trust_remote_code=True)
```

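
Once the model and tokenizer are loaded, text can be generated with the standard `transformers` `generate` API. A minimal sketch, assuming the model loads as above; the prompt and the sampling settings (`max_new_tokens`, `temperature`) are illustrative choices, not recommendations from the model card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Rexopia/HawkLM-demo", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("Rexopia/HawkLM-demo", device_map="auto", trust_remote_code=True)

# Encode a prompt and move it to the same device as the model weights.
inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)

# Sample a short continuation; adjust max_new_tokens and temperature as needed.
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.8)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```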
## Training Data

We sampled an English-only corpus from the RedPajama-1T dataset, excluding the ArXiv and GitHub subsets. As this is a demo release, the model was trained on only 3.3 billion tokens.

## Evaluation

[More Information Needed]

## Citation

[More Information Needed]

## Model Card Contact

[More Information Needed]