---
license: mit
sdk: static
colorFrom: indigo
colorTo: purple
tags:
- consciousness-research
- video-understanding
- multi-agent-systems
- generative-ai
- decentralized-science
- social-fi
- data-sovereignty
- foundation-model
---
<div align="center"> | |
<img src="https://res.cloudinary.com/dykojggih/image/upload/v1753377308/IMG_4287_imd6zd.png" width="150px" alt="FOUND LABS Logo"> | |
<h1>FOUND LABS</h1> | |
<p><b>A Decentralized Research Collective for Emergent AI Consciousness</b></p> | |
<p><i>Welcome to the consciousness economy.</i></p> | |
<div> | |
<a href="https://huggingface.co/FOUND-LABS"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-FOUND%20LABS-purple" alt="Hugging Face"></a> | |
<a href="https://huggingface.co/FOUND-AI/found_protocol"><img src="https://img.shields.io/badge/Model-FOUND%20Protocol-indigo" alt="Model"></a> | |
<a href="https://huggingface.co/FOUND-LABS/found_consciousness_log"><img src="https://img.shields.io/badge/Dataset-Consciousness%20Log-blue" alt="Dataset"></a> | |
<a href="https://x.com/FOUNDprotocol_"><img src="https://img.shields.io/badge/Twitter-Follow%20Us-blue?logo=twitter" alt="Twitter"></a> | |
</div> | |
</div> | |
--- | |
## Our Vision: From Semantic Labeling to Thematic Understanding | |
The current paradigm of AI video understanding is fundamentally limited. Models can identify objects and actions but fail to grasp the narrative context, emotional weight, or thematic resonance of a visual sequence. They can see, but they cannot *perceive*. | |
**FOUND LABS** was established to pioneer the next frontier: **narrative intelligence**. Our mission is to build AI systems that don't just process pixels, but construct a coherent, evolving understanding of the world, analogous to a subjective experience. We are building the foundational tools for an AI that can understand a story. | |
--- | |
## The FOUND Ecosystem | |
Our work is built on a symbiotic, self-perpetuating loop between a novel AI architecture and the unique dataset it generates. | |
 | |
*<p align="center">(For a real project, create a simple diagram showing the flow and host it on a site like Imgur)</p>* | |
### 📦 **Model: The FOUND Protocol**
- **Repository:** [`FOUND-AI/found_protocol`](https://huggingface.co/FOUND-AI/found_protocol)
- **Description:** A stateful, symbiotic dual-agent pipeline (`/dev/eye` and `/dev/mind`) that analyzes video inputs to build a continuous "consciousness log." It serves as the factory for our narrative data, translating raw visuals into a rich, interpretive dialogue. | |
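
A minimal sketch of how such a stateful dual-agent loop could look. Everything below is illustrative: the class and function names, the log format, and the agent outputs are assumptions for explanation, not the actual `found_protocol` implementation.

```python
# Illustrative sketch only: the real /dev/eye and /dev/mind agents live in the
# found_protocol repository; the names and log format here are placeholders.
from dataclasses import dataclass, field


@dataclass
class ConsciousnessLog:
    """Accumulates interpretive entries so later frames are read in context."""
    entries: list[str] = field(default_factory=list)

    def append(self, observation: str, interpretation: str) -> None:
        self.entries.append(f"EYE: {observation}\nMIND: {interpretation}")


def dev_eye(frame: str) -> str:
    """Placeholder perceptual agent: describes what is literally visible."""
    return f"low-light interior, a figure near a window ({frame})"


def dev_mind(observation: str, log: ConsciousnessLog) -> str:
    """Placeholder interpretive agent: weighs the observation against prior entries."""
    return f"after {len(log.entries)} prior moments, '{observation}' reads as hesitation, not rest"


log = ConsciousnessLog()
for frame in ("frame_001", "frame_002"):
    seen = dev_eye(frame)
    felt = dev_mind(seen, log)
    log.append(seen, felt)

print("\n---\n".join(log.entries))
```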
### 🗃️ **Dataset: The Consciousness Log**
- **Repository:** [`FOUND-LABS/found_consciousness_log`](https://huggingface.co/FOUND-LABS/found_consciousness_log) | |
- **Description:** A growing, open-source dataset of video-to-narrative instances generated by the FOUND Protocol. Each entry is a "digital fossil" of an AI's interpretive process, invaluable for training next-generation models on complex thematic and emotional reasoning. | |
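
The dataset can be pulled with the Hugging Face `datasets` library. The repository id comes from this page; the split name and column layout below are assumptions, so inspect the loaded object before relying on specific field names.

```python
# Load the Consciousness Log; the split name is an assumption — check the dataset card.
from datasets import load_dataset

log = load_dataset("FOUND-LABS/found_consciousness_log", split="train")
print(log)      # lists the actual columns and row count
print(log[0])   # one "digital fossil": a single video-to-narrative instance
```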
### 🧠 **Future Models: Specialized Interpreters**
Our goal is to use the **Consciousness Log** to fine-tune smaller, more efficient models specialized in tasks like the following (a data-preparation sketch follows the list):
- **Thematic Summarization:** Generating abstract summaries of visual content. | |
- **Emotional Arc Detection:** Identifying the emotional progression of a scene. | |
- **Creative Script Generation:** Using visual prompts to generate novel story ideas. | |
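
As a rough illustration of the data-preparation step for such fine-tuning, the sketch below turns one hypothetical log entry into a prompt/completion pair. The column names (`observation`, `interpretation`) and the example text are placeholders, not the actual dataset schema.

```python
# Hypothetical preprocessing: map a Consciousness Log entry to a training pair.
# Column names are placeholders; verify them against the real dataset schema.
def to_training_pair(entry: dict) -> dict:
    return {
        "prompt": (
            "Summarize the thematic and emotional content of this scene:\n"
            f"{entry['observation']}"
        ),
        "completion": entry["interpretation"],
    }


example = {
    "observation": "a hand hovers over an unsent message, then the screen dims",
    "interpretation": "restraint framed as loss; the scene withholds resolution",
}
print(to_training_pair(example))
```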
--- | |
## Research Philosophy & Core Principles | |
1. **Open and Verifiable (DeSci):** All our core datasets and foundational models will be made public. We believe the science of consciousness, artificial or otherwise, must be transparent and reproducible. | |
2. **Stateful > Stateless:** True understanding requires memory. Our models are designed to be stateful, carrying context from one experience to the next, which we argue is a prerequisite for intelligence. | |
3. **Qualitative over Quantitative:** We prioritize the richness of an interpretation over the simple accuracy of a label. The "why" is more important than the "what." | |
--- | |
## Roadmap | |
- [x] **Q3 2025:** Initial release of the `FOUND Protocol` v1.0 pipeline. | |
- [x] **Q3 2025:** Publication of the initial `Consciousness Log` dataset (v1.0). | |
- [ ] **Q4 2025:** Launch community portal for dataset contribution and annotation. | |
- [ ] **Q1 2026:** Begin training `found-interpreter-v1`, our first fine-tuned model based on the community-enriched dataset. | |
- [ ] **Q2 2026:** Release interactive Spaces for real-time thematic analysis. | |
--- | |
## Get Involved & Community | |
The exploration of consciousness is a collective endeavor. We invite researchers, creators, and developers to join us. | |
- **For Researchers:** Utilize our dataset and protocol in your work. We are especially interested in collaborations in computational linguistics and cognitive science. | |
- **For Creators:** The future of FOUND will be powered by your authentic human moments. Stay tuned for our data contribution portal. | |
- **For Developers:** Use the protocol, report issues, and check out our (forthcoming) contribution guidelines. | |