reaperdoesntknow committed on
Commit 93fd4a4 · verified · 1 parent: da1f05d

Model save

Files changed (1)
  1. README.md (+30 -141)
README.md CHANGED
@@ -2,165 +2,54 @@
  library_name: transformers
  tags:
  - generated_from_trainer
- - text-generation
- - transformers
- - meta-math
- - qwen2
- - symbolic-ai
- - symbioticlm
-
  model-index:
  - name: SymLM
    results: []
- license: afl-3.0
- datasets:
- - meta-math/MetaMathQA
- language:
- - en
- base_model:
- - Qwen/Qwen2.5-0.5B
- pipeline_tag: text-generation
- ---
-
- # 🧠 SymLM
-
- **SymbioticLM** is a hybrid symbolic–neural language model that integrates a frozen transformer backbone (`Qwen2ForCausalLM`) with a suite of symbolic cognitive modules for adaptive, interpretable reasoning.
-
- ---
-
- ## 📐 Model Description
-
- The architecture fuses neural token-level generation with symbolic introspection and reasoning:
-
- **Dynamic Thought Evolution with Helical Encoding and DNA-Inspired Memory (DTE-HDM)**
-   Enables structured long-term memory and spiral-context encoding across tokens.
-
- **Multi-Agent Symbiotic Response Mechanisms (M.A.S.R.M)**
-   Coordinates symbolic-neural agents via gated attention and adaptive response layers.
-
- **QwenExoCortex**
-   Projects contextual hidden states from the Qwen model into a symbolic fusion space for reasoning and memory replay.
-
- **Symbolic processors**
-   Includes:
-   - `ThoughtDynamicsLNN`
-   - `Liquid / Crystalline Processors`
-   - `Graph Reasoning with DNAConv`
-   - A rolling `ThoughtMemory`
-
- This enables real-time fusion of symbolic thinking, token generation, and reasoning-aware language modeling.
-
- ---
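A minimal usage sketch for loading and querying the checkpoint. The repo id `reaperdoesntknow/SymLM` is inferred from this commit, and the use of `trust_remote_code=True` for the custom symbolic modules is an assumption; adjust both to the actual repository setup.

```python
# Hedged sketch: load SymLM with the standard Transformers API and run a
# math-style prompt. The repo id and trust_remote_code are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "reaperdoesntknow/SymLM"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Prove that the sum of two even integers is even."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```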
-
- ## 🎯 Intended Uses & Limitations
-
- ### ✅ Intended Uses
-
- **Mathematical reasoning and proof generation**
-   Fine-tuned on *MetaMathQA*; optimized for symbolic Q&A, equation logic, and structured inference.
-
- **Symbolic-cognitive AI research**
-   Useful for studying attention modulation, memory replay, and neural-symbolic interface dynamics.
-
- **Low-resource adaptation**
-   The modular memory and projection design enables meaningful performance even with smaller datasets.
-
- **Building adaptive cognition systems**
-   Can serve as a symbolic kernel for reflective AI agents and knowledge evolution pipelines.
-
- ---
-
- ### ⚠️ Limitations
-
- **Limited training scale**
-   Trained on 25,000 MetaMathQA examples; this captures symbolic form well but does not yet support broad generalization.
-
- **No RLHF or alignment**
-   Outputs are not tuned for safety or instruction alignment and may hallucinate.
-
- **Fluency ≠ correctness**
-   Symbolic fluency does not imply mathematically valid proofs; verification is recommended.
-
- **Not optimized for open-domain generation**
-   This model prioritizes logic and structure over conversational depth.
-
  ---
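Since the card cites fine-tuning on 25,000 MetaMathQA examples, here is a hedged sketch of preparing such a subset with 🤗 Datasets. The column names `query` and `response`, and the simple prompt formatting, are assumptions about the schema, not the documented training setup.

```python
# Hedged sketch: pull a 25k-example subset of MetaMathQA as described above.
# The column names "query" and "response" are assumptions about the schema.
from datasets import load_dataset

dataset = load_dataset("meta-math/MetaMathQA", split="train")
subset = dataset.shuffle(seed=42).select(range(25_000))

def to_text(example):
    # Simple question/answer concatenation; the actual formatting used for
    # training is not documented in the card.
    return {"text": example["query"] + "\n" + example["response"]}

subset = subset.map(to_text)
print(subset)
```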
 
- ## ⚙️ Training Procedure
 
- This checkpoint is currently in an experimental phase.
 
- ### 🧪 Training Hyperparameters
 
- **learning_rate**: `3e-5`
- **train_batch_size**: `16`
- **eval_batch_size**: `16`
- **gradient_accumulation_steps**: `64`
- **total_train_batch_size**: `1024` (16 × 64)
- **optimizer**: `AdamW`, betas=(0.9, 0.999), epsilon=1e-08
- **lr_scheduler_type**: `cosine`
- **warmup_steps**: `500`
- **num_epochs**: `3`
- **mixed_precision_training**: `Native AMP`
 
- ---
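For reference, a hedged sketch of how these hyperparameters map onto 🤗 `TrainingArguments`. The output directory and the `fp16` flag are assumptions; only the values listed above come from the card.

```python
# Hedged sketch: express the listed hyperparameters with the HF Trainer API.
# output_dir and fp16=True are assumptions; the other values mirror the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="symlm-metamathqa",      # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=64,      # 16 * 64 = effective batch of 1024
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=3,
    fp16=True,                           # "Native AMP"
)
```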
-
- ## 🧱 Framework Versions
 
- 🤗 Transformers: `4.51.3`
- 🧠 PyTorch: `2.7.0+cu126`
- 📚 Datasets: `3.5.0`
- 🔤 Tokenizers: `0.21.1`
 
- ---
 
- ## 📚 Research Foundations
 
- SymbioticLM builds upon a cohesive theoretical framework for dynamic reasoning and neuro-symbolic learning:
 
- ### 🔁 Multi-Agent Symbiosis and Dynamic Thought
 
- **Rapid Adaptation via Multi-Agent Symbiotic Response Mechanisms (M.A.S.R.M)**
- > A framework where symbolic and neural agents dynamically adapt via gated feedback, memory modulation, and agent-based specialization.
 
- **Focus**: Multi-agent control, reflective learning, contextual responsiveness
 
- ---
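The card does not spell out how the gated feedback between symbolic and neural agents is implemented. The following is a purely illustrative sketch of one common gating pattern (a learned sigmoid gate blending a symbolic state with a neural hidden state), not the M.A.S.R.M mechanism itself.

```python
# Illustrative sketch only: a sigmoid-gated blend of a symbolic state and a
# neural hidden state. The module and shapes are assumptions, not the actual
# M.A.S.R.M implementation.
import torch
import torch.nn as nn

class GatedSymbioticFusion(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h_neural: torch.Tensor, h_symbolic: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) decides, per dimension, how much symbolic signal to mix in.
        g = torch.sigmoid(self.gate(torch.cat([h_neural, h_symbolic], dim=-1)))
        return g * h_symbolic + (1.0 - g) * h_neural

fusion = GatedSymbioticFusion(hidden_dim=896)  # 896 = Qwen2.5-0.5B hidden size
h_neu = torch.randn(1, 10, 896)
h_sym = torch.randn(1, 10, 896)
fused = fusion(h_neu, h_sym)  # same shape as the inputs
```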
-
- ### 🧬 Dynamic Thought Evolution with Helical Encoding and DNA-Inspired Memory (DTE-HDM)
-
- > A memory structure inspired by biological helices, enabling thought persistence through spiral-layered contextual encodings across time.
-
- **Focus**: Long-term token evolution, normalized replay, thought continuity
-
- ---
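The helical encoding itself is not specified in this card. Purely as an illustration (not the DTE-HDM formulation), one way to picture a spiral positional code is to place each time step on helices of different angular frequencies, so nearby steps sit close on each curve while the axis preserves long-range order.

```python
# Illustrative sketch only: a toy helical positional code, NOT the DTE-HDM
# encoding. Each position t is mapped to points on helices of several frequencies.
import numpy as np

def helical_code(num_positions: int, num_helices: int = 4) -> np.ndarray:
    """Return an array of shape (num_positions, 3 * num_helices)."""
    t = np.arange(num_positions, dtype=np.float64)
    feats = []
    for k in range(num_helices):
        omega = 1.0 / (10.0 ** k)          # one angular frequency per helix
        feats += [np.cos(omega * t), np.sin(omega * t), omega * t]
    return np.stack(feats, axis=-1)

codes = helical_code(128)
print(codes.shape)  # (128, 12)
```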
 
- ### 🧠 Integrating DTE-HDM + M.A.S.R.M for Adaptive AI
 
- > Combines symbolic evolution and multi-agent adaptation to construct an LLM that reflects, adapts, and deepens reasoning through internal dynamics.
-
- **Result**: A system that *learns faster*, *adapts deeper*, and *thinks symbolically*
-
- ---
-
- ### 📐 Theoretical Underpinning
-
- **The Analytic Foundations Theorem (AFT)**
- > A rigorous, measure-theoretic replacement for classical calculus: replaces pointwise derivatives with discrepancy-driven integral convergence across vanishing sets.
-
- **Applies to**:
- - Symbolic gradients
- - Gradient-free optimization
- - Discrete logic approximation in function spaces
-
- ---
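The AFT is not stated in this card. Purely as an illustration of what "discrepancy-driven integral convergence across vanishing sets" could look like, one measure-theoretic reading replaces the pointwise difference quotient with an averaged discrepancy over shrinking measurable sets containing the point:

```latex
% Illustrative reading only, not the AFT as stated by the authors:
% L plays the role of a derivative at x_0 if the averaged discrepancy
% vanishes as the measurable sets E containing x_0 shrink to measure zero.
\[
  \lim_{\substack{E \ni x_0 \\ |E| \to 0}}
  \frac{1}{|E|} \int_{E} \bigl| f(x) - f(x_0) - L\,(x - x_0) \bigr| \, dx \;=\; 0 .
\]
```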
-
- These form the **mathematical and architectural core** of SymbioticLM, enabling:
-
- 🧠 *Neuro-symbolic cognitive evolution*
- 🔁 *Multi-agent dynamic feedback coordination*
- 📏 *Formal memory through discrepancy-based logic*
-
- ---
-
  ---
 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
 
+ # SymLM
 
+ This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
 
+ ## Model description
 
+ More information needed
 
+ ## Intended uses & limitations
 
+ More information needed
 
+ ## Training and evaluation data
 
+ More information needed
 
+ ## Training procedure
 
+ ### Training hyperparameters
 
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 32
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 64
+ - total_train_batch_size: 2048 (train_batch_size × gradient_accumulation_steps = 32 × 64)
+ - optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 1
+ - mixed_precision_training: Native AMP
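The `adamw_torch_fused` setting above corresponds to PyTorch's fused AdamW kernel. A hedged sketch of the equivalent direct construction follows; the tiny `Linear` module is a placeholder, and `fused=True` requires CUDA parameters.

```python
# Hedged sketch: the fused AdamW kernel that the Trainer selects for
# optim="adamw_torch_fused". fused=True needs CUDA tensors, so availability
# is checked; the Linear model here is only a placeholder.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(896, 896).to(device)  # placeholder module

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-5,
    betas=(0.9, 0.999),
    eps=1e-8,
    fused=(device == "cuda"),  # matches the card's adamw_torch_fused choice
)
```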
 
+ ### Training results
 
 
+ ### Framework versions
 
+ - Transformers 4.51.3
+ - PyTorch 2.7.0+cu126
+ - Datasets 3.5.0
+ - Tokenizers 0.21.1