Update README.md
README.md
CHANGED
@@ -1,143 +1,119 @@
---
license: mit
datasets:
-
language:
-
base_model:
-
tags:
- - mind-extension
- - philosophy
- - sharegpt
- - alignment
- - language-model
- - multi-turn
- - q-and-a
- - shareable
- - teachable
- - human-feedback
- - deep-learning
- - ai-model
- - research
- - synthetic-data
---
- # 🧠 DeepQ
- [](LICENSE)
- [](https://huggingface.co/StableChatAI/DeepQ)
- [](https://huggingface.co/datasets/kulia-moon/DeepRethink)
- [](https://pypi.org/project/transformers/)
- [](https://pypi.org/project/datasets/)
- [](https://pypi.org/project/transformers/)
- [](https://huggingface.co/spaces)
- ---
- ## 🌟 What is DeepQ?
- ---
- - 🤔 **Deep reasoning** before answering
- - 🧩 Trained with ShareGPT-style conversations + DeepRethink Q&A
- - 🧠 Designed for philosophical, logical, and emotional introspection
- - 🔁 Multi-turn dialogue support
- - ⚡️ Lightweight GPT-2 base for fast inference
- - 🧪 Works on CPU + GPU
- - 🤗 Hugging Face Transformers compatible
- - 🧬 Great base for alignment research or dialog tuning
- ---
- from transformers import AutoTokenizer, AutoModelForCausalLM
- model = AutoModelForCausalLM.from_pretrained("kulia-moon/DeepQ")
- inputs = tokenizer(input_text, return_tensors="pt")
- output = model.generate(**inputs, max_new_tokens=100)
- ---
- ## 🧪
- ---
- author = {Kulia Moon},
- title = {DeepQ: A Deep Thinking Conversational Model},
- year = {2025},
- howpublished = {\url{https://huggingface.co/kulia-moon/DeepRethink}},
- }
- ---
---
license: mit
datasets:
- kulia-moon/DeepRethink
language:
- en
base_model:
- openai-community/gpt2-medium
tags:
- DeepQ
- DeepRethink integrated
- QFamily
- Hugging Face
- NLP
- AI Research
- Reasoning
- Cognitive Simulation
- Transformers
- StableChatAI
- MultiVendor Deployments
- Region-Based Scaling
- Production Ready
---

# 🚀 DeepQ

---

## 🤯 What is DeepQ?

**DeepQ** is a deep reasoning language model built from the **QFamily** architecture and the **DeepRethink** dataset. It targets context-rich inference, explanation generation, and reflective response modeling, aiming to simulate human-like deliberation before answering.

It inherits the base architecture of `gpt2-medium` and is fine-tuned on the **DeepRethink** dataset (`kulia-moon/DeepRethink`), which covers multi-perspective reasoning, contradictory thought, question decomposition, and hypothetical situations – all geared toward a model that *rethinks before responding*.

---
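Since the model is tuned on multi-turn reasoning exchanges, prompts are typically flattened conversations. The sketch below shows one plausible way to do that for a GPT-2-style model; the role/field names are assumptions, as the DeepRethink schema is not documented on this card.

```python
# Hypothetical sketch: flattening a multi-turn exchange into a plain-text
# prompt for a GPT-2-style model. The "role"/"content" field names are
# assumptions, not a documented DeepRethink schema.

def build_prompt(turns: list[dict]) -> str:
    """Render [{'role': 'user'|'assistant', 'content': ...}] as one prompt."""
    lines = []
    for turn in turns:
        speaker = "User" if turn["role"] == "user" else "DeepQ"
        lines.append(f"{speaker}: {turn['content']}")
    lines.append("DeepQ:")  # cue the model to continue as the assistant
    return "\n".join(lines)

conversation = [
    {"role": "user", "content": "Why do people sometimes change their beliefs?"},
    {"role": "assistant", "content": "Often because new evidence conflicts with old assumptions."},
    {"role": "user", "content": "Can that process be modeled?"},
]
prompt = build_prompt(conversation)
print(prompt)
```

The trailing `DeepQ:` line prompts the model to generate the assistant's next turn.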

## 📦 Key Features

| Feature | Description |
| ----------------------- | ----------------------------------------------------------------------- |
| 🧠 DeepRethink Data | Trained on thousands of synthetic and real thought chains |
| 🧬 Cognitive Patterns | Simulates re-evaluation and critical thinking behaviors |
| 🚀 GPT-2 Foundation | Built on `openai-community/gpt2-medium` |
| 🌍 Regional Scaling | Deploys across regions for low-latency use |
| 💬 Reflective Responses | Handles contradiction, dilemma, and uncertainty contexts |
| 📚 Use Case Ready | Research, chatbots, simulators, tutoring systems, AI ethics discussions |
| ⚙️ Multi-vendor Support | Optimized for deployment on Hugging Face, Vercel, AWS, GCP, Azure |
| 🔁 Streaming Compatible | Full support for SSE and WebSocket-based AI pipelines |
| 📜 Licensing | MIT license, open and production-friendly |

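The feature table claims SSE streaming compatibility. As a minimal sketch of the client side, a consumer would accumulate token text from `data:` lines; the `data: {...}` framing is the standard SSE wire format, but the JSON payload field (`"token"`) is an assumption here, not a documented DeepQ contract.

```python
# Minimal sketch of consuming Server-Sent Events (SSE) from a streaming
# text-generation endpoint. "data: ..." framing is standard SSE; the
# {"token": ...} payload shape is an assumption for illustration.
import json

def parse_sse_tokens(stream_lines: list[str]) -> str:
    """Concatenate token text from SSE 'data:' lines, stopping at [DONE]."""
    pieces = []
    for line in stream_lines:
        if not line.startswith("data:"):
            continue  # ignore comments and keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        pieces.append(json.loads(payload)["token"])
    return "".join(pieces)

chunks = [
    'data: {"token": "Deep"}',
    'data: {"token": "Q"}',
    "data: [DONE]",
]
print(parse_sse_tokens(chunks))  # DeepQ
```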
---

## 🌍 Deployments

| Region | Vendor | Endpoint |
| ---------------------- | ------------ | --------------------------------------------------------------------------------- |
| US East (VA) | Hugging Face | [US East](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| EU West (Ireland) | Hugging Face | [EU West](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| Asia (Singapore) | Hugging Face | [Asia](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| Global CDN | Vercel | [Vercel CDN](https://deepq.vercel.app) |
| US West (Oregon) | AWS | [AWS](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| EU Central (Frankfurt) | AWS | [AWS EU](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| Tokyo | GCP | [GCP JP](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| Sydney | Azure | [Azure AU](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| São Paulo | Hugging Face | [Brazil](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| India (Mumbai) | Hugging Face | [India](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| Canada (Montreal) | Hugging Face | [Canada](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| Africa (Cape Town) | Hugging Face | [Africa](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |
| Middle East (Bahrain) | Hugging Face | [Middle East](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ) |

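Once an endpoint is deployed, it is called over HTTP. The sketch below assembles such a request using the common Hugging Face Inference Endpoints convention (`{"inputs": ...}` payload, bearer-token auth); the endpoint URL and token are placeholders, so verify the payload shape against your actual deployment.

```python
# Hypothetical sketch of building an HTTP POST for a deployed DeepQ
# text-generation endpoint. Payload shape follows the common Hugging Face
# Inference Endpoints convention; confirm against your deployment.
import json

def build_generation_request(endpoint_url: str, prompt: str, token: str,
                             max_new_tokens: int = 100) -> dict:
    """Assemble url, headers, and JSON body for a text-generation request."""
    return {
        "url": endpoint_url,
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "inputs": prompt,
            "parameters": {"max_new_tokens": max_new_tokens},
        }),
    }

req = build_generation_request(
    "https://<your-endpoint>.endpoints.huggingface.cloud",  # placeholder URL
    "Why do people sometimes change their beliefs?",
    token="hf_...",  # placeholder token
)
# e.g. requests.post(req["url"], headers=req["headers"], data=req["body"])
```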
---

## 🧪 Use Cases

* **AI Research**: Foundation for studying multi-layered logic simulation and AI explainability
* **Reflective Chatbots**: For applications needing nuanced, multi-turn understanding
* **Tutoring Systems**: Where feedback loops and re-evaluation are essential
* **Debate Engines**: The model holds internal opposition to simulate conflict and resolution
* **Philosophical AI**: Explore cognitive dissonance, ethics, duality, and hypothetical constructs
* **Medical/Ethical Simulators**: With dilemma-aware prompts and double-sided scenarios

---

## 🧠 Quickstart

Install the library:

```bash
pip install transformers
```

Then generate a response:

```python
from transformers import pipeline

qa = pipeline("text-generation", model="StableChatAI/DeepQ")
qa("Why do people sometimes change their beliefs?")
```

---
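A `text-generation` pipeline returns a list of dicts whose `generated_text` echoes the prompt. This small helper (illustrative, not part of the model card; the pipeline output is stubbed so it runs offline) isolates just the model's continuation:

```python
# The text-generation pipeline echoes the prompt inside "generated_text".
# This helper strips it off to isolate the model's continuation.
# The outputs list below is a stub in the standard pipeline format.

def extract_continuation(prompt: str, outputs: list[dict]) -> str:
    full = outputs[0]["generated_text"]
    return full[len(prompt):].lstrip() if full.startswith(prompt) else full

prompt = "Why do people sometimes change their beliefs?"
outputs = [{"generated_text": prompt + " Because new evidence reshapes old conclusions."}]
print(extract_continuation(prompt, outputs))
# Because new evidence reshapes old conclusions.
```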

## 🔗 Links

* **Model Card**: [https://huggingface.co/StableChatAI/DeepQ](https://huggingface.co/StableChatAI/DeepQ)
* **Dataset**: [https://huggingface.co/datasets/kulia-moon/DeepRethink](https://huggingface.co/datasets/kulia-moon/DeepRethink)
* **Deploy Model**: [https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ](https://endpoints.huggingface.co/new?repository=StableChatAI/DeepQ)
* **GitHub**: [https://github.com/StableChatAI/DeepQ](https://github.com/StableChatAI/DeepQ)
* **License**: MIT

---

> *“DeepQ isn't just another language model – it's a new frontier of thought.”*
> – QFamily Lab 🧪