---
license: apache-2.0
language:
  - en
library_name: llama.cpp
tags:
  - gguf
  - quantized
  - int8
  - offline-ai
  - local-llm
  - chatnonet
model_type: causal
inference: true
pipeline_tag: text-generation
---

# NONET

**NONET** is a family of **offline**, quantized large language models fine-tuned for **question answering** with **direct, concise answers**. Designed for local execution using `llama.cpp`, NONET is available in multiple sizes and optimized for Android or Python-based environments.

## Model Details

### Model Description

NONET is intended for lightweight offline use, particularly on local devices like mobile phones or single-board computers. The models have been **fine-tuned for direct-answer QA** and quantized to **int8 (q8_0)** using `llama.cpp`.

| Model Name                        | Base Model         | Size   |
|----------------------------------|--------------------|--------|
| ChatNONET-135m-tuned-q8_0.gguf   | SmolLM             | 135M   |
| ChatNONET-300m-tuned-q8_0.gguf   | SmolLM             | 300M   |
| ChatNONET-1B-tuned-q8_0.gguf     | LLaMA 3.2          | 1B     |
| ChatNONET-3B-tuned-q8_0.gguf     | LLaMA 3.2          | 3B     |

- **Developed by:** McaTech (Michael Cobol Agan)
- **Model type:** Causal decoder-only transformer
- **Languages:** English
- **License:** Apache 2.0
- **Finetuned from:**
  - SmolLM (135M, 300M variants)
  - LLaMA 3.2 (1B, 3B variants)

## Uses

### Direct Use

- Offline QA chatbot
- Local assistants (no internet required)
- Embedded Android or Python apps

### Out-of-Scope Use

- Long-form text generation
- Tasks requiring real-time web access
- Creative storytelling or coding tasks

## Bias, Risks, and Limitations

NONET may reproduce biases present in its base models or fine-tuning data. Outputs should not be relied upon for sensitive or critical decisions.

### Recommendations

- Validate important responses
- Choose model size based on your device capability
- Avoid over-reliance for personal or legal advice

## How to Get Started with the Model

### For Android Devices

- Try the **Android app**: [Download ChatNONET APK](https://drive.google.com/file/d/1-5Ozx_VsOUBS5_b4yS40MCaNZge_5_1f/view?usp=sharing)

### Build llama.cpp Yourself and Run the Model
```bash
# Clone llama.cpp and build it
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run the model
./llama-cli -m ./ChatNONET-300m-tuned-q8_0.gguf -p "You are ChatNONET AI assistant." -cnv
```
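For Python-based environments, the same GGUF files can be loaded with the `llama-cpp-python` bindings. The sketch below is a minimal, hedged example: the model path, the plain-text prompt format, and the generation parameters are assumptions, not the card's official recipe.

```python
# Minimal sketch of running a ChatNONET GGUF model from Python via
# llama-cpp-python (pip install llama-cpp-python). The model path and
# prompt format below are hypothetical.
from pathlib import Path

MODEL_PATH = Path("ChatNONET-300m-tuned-q8_0.gguf")  # hypothetical local path


def build_prompt(question: str) -> str:
    """Wrap a user question in the simple system prompt used in the CLI example."""
    return f"You are ChatNONET AI assistant.\nUser: {question}\nAssistant:"


if MODEL_PATH.exists():
    from llama_cpp import Llama

    # Load the quantized model; n_ctx sets the context window.
    llm = Llama(model_path=str(MODEL_PATH), n_ctx=2048)
    out = llm(build_prompt("What is the capital of France?"), max_tokens=64)
    print(out["choices"][0]["text"].strip())
```

Smaller variants (135M, 300M) load quickly even on modest hardware; the 1B and 3B variants trade startup time and memory for better reasoning.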

## Training Details

* **Finetuning Goal:** Direct-answer question answering
* **Precision:** FP16 mixed precision
* **Frameworks:** PyTorch, Transformers, Bitsandbytes
* **Quantization:** int8 GGUF (`q8_0`) via `llama.cpp`
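The q8_0 conversion described above follows the standard llama.cpp pipeline: convert the fine-tuned Hugging Face checkpoint to GGUF, then quantize. The sketch below uses the stock llama.cpp tools; the checkpoint directory and file names are hypothetical, and each step is guarded so it only runs when its input exists.

```shell
# Sketch of the usual llama.cpp quantization flow (hypothetical paths).

# 1) Convert the fine-tuned Hugging Face checkpoint to an FP16 GGUF file:
if [ -d ./ChatNONET-finetuned ]; then
  python convert_hf_to_gguf.py ./ChatNONET-finetuned --outfile ChatNONET-f16.gguf
fi

# 2) Quantize the FP16 GGUF down to int8 (q8_0):
if [ -f ChatNONET-f16.gguf ]; then
  ./llama-quantize ChatNONET-f16.gguf ChatNONET-300m-tuned-q8_0.gguf q8_0
fi
```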

## Evaluation

* Evaluated internally on short QA prompts
* Capable of direct factual or logical answers
* Larger models perform better on reasoning tasks

## Technical Specifications

* **Architecture:**

  * SmolLM (135M, 300M)
  * LLaMA 3.2 (1B, 3B)
* **Format:** GGUF
* **Quantization:** q8\_0 (int8)
* **Deployment:** Mobile (Android) and desktop via `llama.cpp`

## Citation

```bibtex
@misc{chatnonet2025,
  title={ChatNONET: Offline Quantized Q&A Models},
  author={Michael Cobol Agan},
  year={2025},
  note={\url{https://huggingface.co/McaTech/Nonet}},
}
```

## Contact

* **Author:** Michael Cobol Agan (McaTech)
* **Facebook:** [FB Profile](https://www.facebook.com/michael.cobol.agan.2025/)