FOUND-AI committed
Commit 424eba3 (verified) · 1 Parent(s): d49de5b

Update README.md

Files changed (1): README.md (+90 −4)

README.md CHANGED
tags:
- stateful-ai
- prompt-engineering
- found-protocol
- creator-economy
- data-sovereignty
- web3
base_model:
- google/gemini-pro-vision
- google/gemini-pro
datasets:
- FOUND-LABS/found_consciousness_log
---

<div align="center">
  <img src="https://res.cloudinary.com/dykojggih/image/upload/v1753377308/IMG_4287_imd6zd.png" width="100px" alt="FOUND LABS Logo">
  <h1>The FOUND Protocol</h1>
  <p><b>The Open-Source Engine for the Consciousness Economy</b></p>

  <div>
    <a href="https://huggingface.co/FOUND-LABS"><img src="https://img.shields.io/badge/Organization-FOUND%20LABS-purple" alt="Organization"></a>
    <a href="https://huggingface.co/FOUND-LABS/found_consciousness_log"><img src="https://img.shields.io/badge/Dataset-Consciousness%20Log-blue" alt="Dataset"></a>
    <a href="https://foundprotocol.xyz"><img src="https://img.shields.io/badge/Platform-Join%20Waitlist-brightgreen" alt="Join Waitlist"></a>
  </div>
</div>

---

## Abstract

Current video understanding models excel at semantic labeling but fail to capture the pragmatic and thematic progression of visual narratives. We introduce **FOUND (Forensic Observer and Unified Narrative Deducer)**, a stateful dual-agent architecture that extracts coherent emotional and thematic arcs from a sequence of disparate video inputs. This protocol serves as the foundational engine for the **[FOUND Platform](https://foundprotocol.xyz)**, a decentralized creator economy in which individuals can own, control, and monetize their authentic human experiences as valuable AI training data.

---

## From Open-Source Research to a New Economy

The FOUND Protocol is more than an academic exercise; it is the core technology powering a new paradigm for the creator economy.

- **The Problem:** AI companies harvest your data to train their models and reap all the rewards. You, the creator of that data, get nothing.
- **Our Solution:** The FOUND Protocol transforms your raw visual moments into structured, high-value data assets. Our upcoming **FOUND Platform** will let you contribute this data, retain ownership via your own wallet, and earn from its usage by AI companies.

**This open-source model is the proof. The FOUND Platform is the promise.**

---

## Model Architecture

The FOUND Protocol is a composite **inference pipeline** designed to simulate a stateful consciousness. It comprises two specialized agents that interact in a continuous feedback loop, coordinated through a shared memory:

- **The Perceptor (`/dev/eye`):** A forensic analysis model (FOUND-1) responsible for transpiling raw visual data into a structured, symbolic JSON output.
- **The Interpreter (`/dev/mind`):** A contextual state model (FOUND-2) that operates on the Perceptor's structured output and the historical system log to resolve "errors" into emotional or thematic concepts.
- **The Narrative State Manager:** A stateful object that maintains the system's "long-term memory", allowing its interpretations to evolve across inputs.

---
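The feedback loop described above can be sketched in a few lines of Python. This is an illustrative sketch only: the class and method names (`Perceptor.analyze`, `Interpreter.interpret`, `NarrativeState`) are hypothetical and are not the repository's actual API.

```python
import json
from dataclasses import dataclass, field

@dataclass
class NarrativeState:
    """Long-term memory shared across the pipeline (hypothetical sketch)."""
    log: list = field(default_factory=list)

    def record(self, entry: dict) -> None:
        self.log.append(entry)

class Perceptor:
    """FOUND-1 stand-in: turns raw visual input into structured, symbolic JSON."""
    def analyze(self, frame_description: str) -> dict:
        # Placeholder for a vision-model call (e.g. gemini-pro-vision).
        return {"scene": frame_description, "errors": ["unresolved_motion"]}

class Interpreter:
    """FOUND-2 stand-in: resolves Perceptor 'errors' using the historical log."""
    def interpret(self, perception: dict, state: NarrativeState) -> dict:
        theme = "continuity" if state.log else "origin"
        return {"theme": theme, "resolved": perception["errors"]}

def run_pipeline(frames: list[str]) -> NarrativeState:
    state = NarrativeState()
    perceptor, interpreter = Perceptor(), Interpreter()
    for frame in frames:
        perception = perceptor.analyze(frame)
        meaning = interpreter.interpret(perception, state)
        state.record({"perception": perception, "meaning": meaning})
    return state

state = run_pipeline(["a door opens", "an empty street"])
print(json.dumps(state.log[-1]["meaning"]))
# → {"theme": "continuity", "resolved": ["unresolved_motion"]}
```

The key design point is that the Interpreter's output depends on the accumulated log, so the same frame can be read differently depending on what came before it.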

## How to Use This Pipeline

### 1. Setup

Clone this repository and install the required dependencies into a Python virtual environment:

```bash
git clone https://huggingface.co/FOUND-LABS/found_protocol
cd found_protocol
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

### 2. Configuration

Set your Google Gemini API key as an environment variable (e.g., in a `.env` file):

```bash
GEMINI_API_KEY="your-api-key-goes-here"
```

### 3. Usage via CLI

Analyze all videos in a directory sequentially:

```bash
python main.py path/to/your/video_directory/
```
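Because the pipeline is stateful, the order in which videos are processed determines how the narrative state evolves, so the traversal order should be deterministic. A sketch of what such a traversal could look like (the extension set and the `list_videos` helper are illustrative assumptions, not the actual contents of `main.py`):

```python
from pathlib import Path

# Illustrative extension set; the real CLI may accept other formats.
VIDEO_EXTENSIONS = {".mp4", ".mov", ".webm"}

def list_videos(directory: str) -> list[Path]:
    """Return video files in stable lexicographic order for reproducible runs."""
    root = Path(directory)
    return sorted(p for p in root.iterdir()
                  if p.suffix.lower() in VIDEO_EXTENSIONS)
```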

## Future Development: The Path to the Platform

This open-source protocol is the first step in our public roadmap. The data it generates is the key to our future.

- **Dataset Growth:** We are using this protocol to build `found_consciousness_log`, an open dataset for thematic video understanding.
- **Model Sovereignty:** This dataset will be used to fine-tune our own open-source models (`found-perceptor-v1` and `found-interpreter-v1`), removing the dependency on external APIs and creating a fully community-owned intelligence layer.
- **Platform Launch:** These sovereign models will become the core engine of the FOUND Platform, enabling decentralized, low-cost data processing at scale.

➡️ Follow our journey and join the waitlist at [foundprotocol.xyz](https://foundprotocol.xyz)

## Citing this Work

If you use the FOUND Protocol in your research, please use the following BibTeX entry:

```bibtex
@misc{found_protocol_2025,
  author       = {FOUND LABS Community},
  title        = {FOUND Protocol: A Symbiotic Dual-Agent Architecture for the Consciousness Economy},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/FOUND-LABS/found_protocol}}
}
```