sdk: static
pinned: false
---

# Reactive AI
We are working on our own idea of Reactive Neural Networks (RxNN): a special kind of memory-augmented neural network that keeps state/memory between interactions/sequences, rather than between tokens/elements within a single sequence, and provides reactive communication patterns.
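
To make the distinction concrete, here is a minimal, hypothetical PyTorch sketch (not the RxNN API; all names and the read/write wiring are illustrative assumptions): the memory state is read and written once per interaction, so it survives between sequences instead of living only inside one.

```python
import torch
from torch import nn

class InteractionMemorySketch(nn.Module):
    """Toy memory-augmented model: state persists across interactions."""

    def __init__(self, dim: int = 256, slots: int = 8, heads: int = 4):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.initial_memory = nn.Parameter(torch.zeros(1, slots, dim))
        self.read = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.write = nn.GRUCell(dim, dim)

    def forward(self, tokens: torch.Tensor, memory: torch.Tensor):
        h = self.encoder(tokens)                  # process the current interaction
        read, _ = self.read(h, memory, memory)    # read from the persistent state
        h = h + read
        summary = h.mean(dim=1)                   # (batch, dim) interaction summary
        b, s, d = memory.shape
        memory = self.write(
            summary.repeat_interleave(s, dim=0),  # one write per memory slot
            memory.reshape(b * s, d),
        ).view(b, s, d)
        return h, memory                          # state outlives this call

model = InteractionMemorySketch()
memory = model.initial_memory.expand(1, -1, -1)   # one conversation's state
for turn in (torch.randn(1, 16, 256), torch.randn(1, 16, 256)):
    output, memory = model(turn, memory)          # updated once per interaction
```
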
Our primary architecture, **Reactor**, is planned as the first _**awareness AGI model**_: it models awareness as an _Infinite Chain-of-Thoughts_, connected to _Short-Term and Long-Term Memory_ (an _Attention-based Memory System_) and to _Receptor/Effector_ systems for real-time reactive processing. It will be able to learn from interactions constantly and autonomously, in a _Continuous Live Learning_ process.
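
A heavily hedged sketch of the event loop this implies; every name here (receptors as a queue, the step/update hooks) is an illustrative assumption, not the actual Reactor design:

```python
import queue
from typing import Any, Callable, Optional

class ReactorLoopSketch:
    """Each step consumes the previous thought, an optional external event,
    and both memories, so the chain of thoughts never has to stop."""

    def __init__(self, step_fn: Callable, stm_update: Callable, ltm_update: Callable):
        self.step_fn = step_fn
        self.stm_update, self.ltm_update = stm_update, ltm_update
        self.inbox: queue.Queue = queue.Queue()    # receptors push events here
        self.stm: list = []                        # toy short-term memory
        self.ltm: list = []                        # toy long-term memory

    def perceive(self, event: Any) -> None:        # receptor entry point
        self.inbox.put(event)

    def run(self, steps: int = 10):
        thought: Optional[Any] = None
        for _ in range(steps):                     # stands in for an infinite loop
            event = None if self.inbox.empty() else self.inbox.get()
            thought, action = self.step_fn(thought, event, self.stm, self.ltm)
            self.stm_update(self.stm, thought)     # memory updated every step,
            self.ltm_update(self.ltm, self.stm)    # not only on user requests
            if action is not None:
                yield action                       # effector output

loop = ReactorLoopSketch(
    step_fn=lambda t, e, stm, ltm: (f"thought about {e}", f"react to {e}" if e else None),
    stm_update=lambda stm, t: stm.append(t),
    ltm_update=lambda ltm, stm: None,
)
loop.perceive("sensor event")
print(list(loop.run(steps=3)))                     # ['react to sensor event']
```
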
While **Reactor** is the main goal, it is extremely hard to achieve: it is the most advanced neural network ensemble we have ever designed.
That's why we designed simplified architectures for an incremental transformation from language/reasoning models into the awareness model (the memory read is sketched after this list):
- **Reactive Transformer** introduces the _Attention-based Memory System_ and adds _Short-Term Memory_ to Transformer language models
- **Preactor** adds _Long-Term Memory_ and the ability to learn from interactions
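
A hedged sketch of the idea behind the attention-based memory read: a standard decoder layer extended with one extra cross-attention step over a fixed-size Short-Term Memory. The layer names and wiring are assumptions for illustration, not the published architecture.

```python
import torch
from torch import nn

class MemoryAugmentedDecoderLayer(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mem_attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # STM read
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, stm: torch.Tensor) -> torch.Tensor:
        # Standard self-attention over the current interaction only
        # (causal mask omitted for brevity).
        h = self.n1(x)
        x = x + self.self_attn(h, h, h)[0]
        # Extra read step: queries come from the sequence, keys/values from the
        # fixed-size STM slots, so past interactions stay visible without
        # re-processing their tokens.
        h = self.n2(x)
        x = x + self.mem_attn(h, stm, stm)[0]
        return x + self.ff(self.n3(x))

x, stm = torch.randn(1, 32, 256), torch.randn(1, 8, 256)   # 8 memory slots
out = MemoryAugmentedDecoderLayer()(x, stm)                 # -> (1, 32, 256)
```
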
We are currently working on the **Reactive Transformer Proof-of-Concept (RxT-Alpha)**, which will be published soon.
More info soon

## RxNN Platform

We are working on a complete Reactive Neural Networks development framework: [RxNN on GitHub](https://github.com/RxAI-dev/RxNN)
## Additional Research

- **Sparse Query Attention (SQA)** - the most cost-effective GQA variant, reducing training time/cost by ~10%. Research in progress (a hedged sketch follows this list).
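
Since the SQA research is still in progress, the sketch below only illustrates the direction we understand it to take: where GQA shrinks the number of key/value heads to save memory bandwidth, reducing the number of *query* heads shrinks the attention computation itself. All head counts and wiring here are illustrative assumptions, not the final design.

```python
import torch
from torch import nn

class SparseQueryAttentionSketch(nn.Module):
    """Toy attention with fewer query heads than the full multi-head model."""

    def __init__(self, dim: int = 256, q_heads: int = 4, kv_heads: int = 4,
                 full_heads: int = 8):
        super().__init__()
        self.hd = dim // full_heads              # keep the full model's head dim
        self.q_heads, self.kv_heads = q_heads, kv_heads
        self.q = nn.Linear(dim, q_heads * self.hd)   # fewer query heads than MHA
        self.k = nn.Linear(dim, kv_heads * self.hd)
        self.v = nn.Linear(dim, kv_heads * self.hd)
        self.o = nn.Linear(q_heads * self.hd, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q(x).view(b, t, self.q_heads, self.hd).transpose(1, 2)
        k = self.k(x).view(b, t, self.kv_heads, self.hd).transpose(1, 2)
        v = self.v(x).view(b, t, self.kv_heads, self.hd).transpose(1, 2)
        if self.kv_heads != self.q_heads:        # share KV across query groups
            rep = self.q_heads // self.kv_heads
            k, v = k.repeat_interleave(rep, dim=1), v.repeat_interleave(rep, dim=1)
        att = (q @ k.transpose(-2, -1)) / self.hd ** 0.5  # fewer heads, fewer FLOPs
        out = att.softmax(dim=-1) @ v
        return self.o(out.transpose(1, 2).reshape(b, t, -1))

y = SparseQueryAttentionSketch()(torch.randn(2, 16, 256))  # -> (2, 16, 256)
```
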