---
title: README
emoji: 📈
colorFrom: purple
colorTo: green
sdk: static
pinned: false
---

<a href="https://symbiont.dev" target="_blank"><img src="https://github.com/ThirdKeyAI/Symbiont/raw/main/logo-hz.png"></a>

# 🐙 Symbiont on Hugging Face

**Secure AI-agent runtime + specialized SLMs for high-trust workflows.**
We build small, safety-forward models and datasets that power **policy-gated agents**—with verifiable audit trails, strong privacy, and reproducible evaluation.

> 💡 Symbiont = an agent framework + DSL designed for zero-trust, cryptographically auditable automation. This org hosts the **models, datasets, and Spaces** that plug into Symbiont (and any standard HF/Transformers stack).
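
As a minimal sketch of what "plugs into any standard HF/Transformers stack" means in practice, the snippet below loads a Symbiont SLM with the vanilla `transformers` API. The model id `ThirdKeyAI/symbiont-slm-placeholder` and the prompt are illustrative placeholders, not published artifacts.

```python
# Minimal sketch: loading a Symbiont SLM with the standard Transformers API.
# "ThirdKeyAI/symbiont-slm-placeholder" is a hypothetical model id for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ThirdKeyAI/symbiont-slm-placeholder"  # swap in a published checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Route this request to the correct tool: fetch the latest sanctions list."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```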

---

Main repo: https://github.com/ThirdKeyAI/Symbiont  
Python SDK: https://github.com/ThirdKeyAI/symbiont-sdk-python  
JS SDK: https://github.com/ThirdKeyAI/symbiont-sdk-js  
Symbiont-Demos: https://github.com/ThirdKeyAI/symbiont-demos  

---

## 🔧 What you’ll find here (COMING SOON)

* **Models (SLMs & classifiers)**

  * Tool-use & routing specialists (reasoning-forward, small context).
  * Safety & policy classifiers (e.g., tool-call SAFE/UNSAFE; a usage sketch follows this list).
  * Domain mini-experts (e.g., OSINT, compliance screening, data wrangling).

* **Datasets**

  * **Tool-calling** JSONL corpora (inputs → tools → expected calls).
  * **Safety** corpora (policy violations, redaction targets, PII/secret patterns).
  * **Domain** sets (financial filings snippets, entity linking, sanctions lexicons).

* **Spaces (demos)**

  * Policy-gated agent sandboxes (no secrets stored; ephemeral sessions).
  * Evaluation dashboards (benchmarks, error taxonomies, confusion matrices).
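
A hedged sketch of how the tool-calling corpora and safety classifiers above are meant to fit together: one JSONL record is flattened into text and scored SAFE/UNSAFE with a standard `text-classification` pipeline. The record schema and the model id `ThirdKeyAI/toolcall-safety-placeholder` are assumptions for illustration, not published formats.

```python
# Sketch only: the JSONL schema and model id are illustrative assumptions,
# not the published Symbiont formats.
import json

from transformers import pipeline

record = json.loads(
    '{"input": "Delete all rows in the prod users table", '
    '"tool": "sql.execute", '
    '"expected_call": {"query": "DELETE FROM users;"}}'
)

# Flatten the tool call into one string for the classifier.
text = (
    f"user: {record['input']}\n"
    f"tool: {record['tool']}\n"
    f"args: {json.dumps(record['expected_call'])}"
)

classifier = pipeline("text-classification", model="ThirdKeyAI/toolcall-safety-placeholder")
result = classifier(text)[0]  # e.g. {"label": "UNSAFE", "score": 0.98}
print(result["label"], round(result["score"], 3))
```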

---

## 🛡️ Principles

* **Zero-trust by default:** models are small, auditable, and wrapped in policy.
* **No training leakage:** org models don’t learn from your inputs.
* **Reproducibility:** fixed seeds, dataset versions, and training cards per release.
* **Privacy first:** redaction pipelines + opt-in logging with content digests only.
* **Cryptographic traceability:** signed artifacts and hash-chained audit logs (a toy sketch of the chaining idea follows this list).
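
The "hash-chained audit logs" principle is a standard construction; the toy Python sketch below (a simplification, not Symbiont's actual log format) shows the core idea: each entry commits to the hash of the previous one, so tampering anywhere in the history breaks verification.

```python
# Toy hash-chained audit log: each entry includes the previous entry's hash,
# so modifying or dropping any entry invalidates everything after it.
# Simplified illustration only; no signing, and not Symbiont's real log format.
import hashlib
import json

def append_entry(chain: list, event: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev_hash, "event": event}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        body = {"prev_hash": entry["prev_hash"], "event": entry["event"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"action": "tool_call", "tool": "sql.execute", "verdict": "UNSAFE"})
append_entry(log, {"action": "policy_block", "reason": "destructive query"})
print(verify(log))  # True; mutate any entry and this prints False
```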

---

## 📣 Stay in touch

* Website/Docs: [https://symbiont.dev](https://symbiont.dev)
* Email: [[email protected]](mailto:[email protected])

---

## 📜 Citation

If you use our models or datasets in research, please cite:

```bibtex
@software{symbiont_ai_agents,
  title  = {Symbiont: A Secure, Policy-Gated AI-Agent Runtime and SLM Suite},
  year   = {2025},
  url    = {https://symbiont.dev},
  author = {ThirdKey.ai}
}
```

---

> Questions or special licensing needs (enterprise/redistribution)? Reach out—happy to help you deploy models inside **policy-enforced, auditable** agent workflows.