AdamF92 committed · Commit ac403ab · verified · 1 Parent(s): a8eadf7

Update README.md

Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -35,6 +35,9 @@ Our primary architecture - **Reactor** - is planned as the first _**awareness AG
 connected to _Short-Term and Long-Term Memory_ (_Attention-based Memory System_) and _Receptors/Effectors_ systems for real-time reactive processing.
 It will be able to constantly and autonomously learn from interactions in a _Continuous Live Learning_ process.
 
+> Reactor architecture details were analysed by 30 state-of-the-art LLM/Reasoning models that rated its potential
+> to reach AGI at ~4.35/5
+
 ## Reactive Language Models (RxLM)
 While the **Reactor** is the main goal, it's extremely hard to achieve, as it's definitely the most advanced neural network ensemble ever.
 
@@ -44,7 +47,7 @@ That's why we designed simplified architectures, for incremental transformation
 
 ## RxLM vs LLM advantages
 Processing single interactions in real-time by **Reactive Language Models** leads to **revolutionary** improvements in inference speed/cost:
-- LLM inference costs are increasing exponentially with conversation length (accumulated for each next message), because of full dialog history processing
+- LLM inference costs are increasing quadratically with conversation length (accumulated for each next message), because of full dialog history processing
 - RxLM inference costs are linear, depending only on single interaction tokens (not accumulated) - each next interaction is `number of steps` times cheaper than for LLM
 - same for inference speed - LLM has to process full history, while RxLM only a single message (only first interaction could be slower because of encoder/memory attention overhead)
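
The changed bullet (quadratic vs. linear cost) can be sanity-checked with a short back-of-the-envelope script. The sketch below is not from the RxLM codebase; the function names, per-turn token count, and the simple "cost = tokens processed" model are illustrative assumptions. It only shows why re-processing the full dialog history makes total LLM work grow quadratically with the number of turns, while per-interaction processing keeps RxLM work linear.

```python
# Illustrative cost model (assumption): cost is proportional to tokens processed per turn.

def llm_tokens_processed(turns: int, tokens_per_turn: int) -> int:
    """Stateless LLM: every turn re-processes the full accumulated dialog history."""
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn   # new message is appended to the history
        total += history             # the whole history is processed again
    return total                     # grows ~quadratically with `turns`

def rxlm_tokens_processed(turns: int, tokens_per_turn: int) -> int:
    """RxLM: each turn processes only the current interaction (context lives in memory)."""
    return turns * tokens_per_turn   # grows linearly with `turns`

if __name__ == "__main__":
    turns, tokens_per_turn = 20, 200
    print("LLM :", llm_tokens_processed(turns, tokens_per_turn), "tokens")   # 42000
    print("RxLM:", rxlm_tokens_processed(turns, tokens_per_turn), "tokens")  # 4000
    # At the N-th interaction the LLM processes ~N times more tokens than the RxLM,
    # matching the "`number of steps` times cheaper" claim in the bullet above.
```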