ParScale

Base models trained on 1T high-quality tokens, demonstrating strong competitiveness among existing SOTA small models (<2B).
Instruct models derived from the ParScale-1.8B base models, trained on SmolTalk-1M to enable conversational capabilities.

- ParScale/ParScale-1.8B-P8-Inst • Text Generation • 2B • Updated • 63 • 1
- ParScale/ParScale-1.8B-P4-Inst • Text Generation • 2B • Updated • 12 • 1
- ParScale/ParScale-1.8B-P2-Inst • Text Generation • 2B • Updated • 8
- ParScale/ParScale-1.8B-P1-Inst • Text Generation • 2B • Updated • 73 • 1
Models (67)

ParScale/ParScale-1.8B-P1-Inst
Text Generation • 2B • Updated • 73 • 1

ParScale/ParScale-1.8B-P2-Inst
Text Generation • 2B • Updated • 8

ParScale/ParScale-1.8B-P4-Inst
Text Generation • 2B • Updated • 12 • 1

ParScale/ParScale-1.8B-P8-Inst
Text Generation • 2B • Updated • 63 • 1

ParScale/ParScale-1.8B-P1
Text Generation • 2B • Updated • 5 • 1

ParScale/ParScale-1.8B-P2
Text Generation • 2B • Updated • 11

ParScale/ParScale-1.8B-P4
Text Generation • 2B • Updated • 31 • 1

ParScale/ParScale-Qwen-3B-P2-Python
Text Generation • 3B • Updated • 12

ParScale/ParScale-Qwen-3B-P4-Python
Text Generation • 3B • Updated • 16

ParScale/ParScale-Qwen-3B-P8-Python
Text Generation • 3B • Updated • 45
Datasets (0)

None public yet.
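The instruct checkpoints above can be tried with the Hugging Face transformers library. The sketch below is a minimal example, not an official snippet from the model cards: it assumes trust_remote_code=True is needed (ParScale ships a custom model class on the Hub rather than a stock transformers architecture) and that the instruct checkpoints provide a chat template.

```python
# Any of the instruct checkpoints listed above; P8 shown here.
model_id = "ParScale/ParScale-1.8B-P8-Inst"

def chat(prompt: str, max_new_tokens: int = 128) -> str:
    """Run one chat turn with a ParScale instruct model (illustrative sketch).

    transformers is imported lazily so the module can be inspected without
    the library installed; trust_remote_code=True is an assumption based on
    ParScale using a custom architecture.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    # Format the single-turn conversation with the checkpoint's chat template.
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )

    # Generate and decode only the newly produced tokens.
    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )

if __name__ == "__main__":
    print(chat("Hello!"))
```

Swapping model_id for one of the P1/P2/P4 variants compares the same 1.8B backbone at different parallel-scaling widths without changing the calling code.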