Metadata-Version: 2.4
Name: airborne-antara
Version: 0.1.109
Summary: Production-grade adaptive meta-learning framework for continual model improvement. Implements research DOI: 10.5281/zenodo.17839490.
Author-email: AirborneHRS Team <friday@airbornehrs.in>
License-Expression: MIT
Project-URL: Homepage, https://github.com/Ultron09/Mirror_mind
Project-URL: Documentation, https://github.com/Ultron09/Mirror_mind#readme
Project-URL: Repository, https://github.com/Ultron09/Mirror_mind.git
Project-URL: Issues, https://github.com/Ultron09/Mirror_mind/issues
Project-URL: Research Paper, https://doi.org/10.5281/zenodo.17839490
Keywords: meta-learning,online-learning,continual-learning,adaptive-optimization,production-ml,automl,pytorch,deep-learning,self-healing-systems,neural-networks,model-monitoring
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Information Technology
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Classifier: Topic :: System :: Monitoring
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=2.0.0
Requires-Dist: numpy>=1.24.0
Requires-Dist: matplotlib>=3.7.0
Requires-Dist: tensorboard>=2.12.0
Requires-Dist: rich
Provides-Extra: dev
Requires-Dist: pytest>=7.4.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Requires-Dist: black>=23.0; extra == "dev"
Requires-Dist: isort>=5.12.0; extra == "dev"
Requires-Dist: flake8>=6.0.0; extra == "dev"
Dynamic: license-file

<div align="center">
<h1>AIRBORNE-ANTARA</h1>
<h3>Adaptive Neural Thinking Architecture For Recursive Autonomy</h3>

<h3><b>V8.1 // CODENAME: "SENTIENT" EDITION (PRODUCTION READY)</b></h3>

[![Architecture](https://img.shields.io/badge/ARCH-COG_WRAPPER-000000.svg?style=for-the-badge&logo=pytorch)]()
[![System](https://img.shields.io/badge/SYSTEM-SENTIENT_META_LEARNING-red.svg?style=for-the-badge)]()
[![Status](https://img.shields.io/badge/Status-MISSION_CRITICAL-darkred.svg?style=for-the-badge)]()

> *"Intelligence is no longer just trained. It is synthesized through awareness."*

| **Autonomous Consciousness** | **Unified Memory** |
|:---:|:---:|
| ![Consciousness](https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExM25uN3JsNXpvejc0a3B3NXBucGU4NGd2eWJlYTBwc2xqdWdpejcyNCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/foecxPebqfDx5gxQCU/giphy.gif) <br> *Recursive Workspace V2* | ![Memory](https://media2.giphy.com/media/v1.Y2lkPTc5MGI3NjExMnNhczlyMjJob2VzaGU4YTN6amJ1a2k2eXRvNjlpejFxbGg5cGh6bCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/1fM9ePvlVcqZ2/giphy.gif) <br> *Holographic Saliency Pooling* |

</div>

---

## 🏆 SENTIENT CAPABILITIES (V8.0)

> [!IMPORTANT]
> ANTARA V8.0 is a non-destructive cognitive wrapper. It does not replace your model weights; it builds a "conscious" manifold around them.

### 1. Unified Memory (SI + EWC + Universal OGD)
**Result:** Mitigates catastrophic forgetting across arbitrary architectures. **V9.3 Update**: Implemented **Universal Tensor Projection**, extending memory protection to Conv2d, Attention, and RNN layers, making the entire backbone effectively immortal.
> *See `airborne_antara/memory.py`*

### 2. Recursive Consciousness (System 2)
**Result:** Enabled slow, deliberative reasoning over complex tasks using the **Recursive Global Workspace**. The model now generates and evaluates thought traces before final execution.
> *See `airborne_antara/consciousness_v2.py`*

### 3. Perception Gateway (Multi-Modal)
**Result:** Native support for Vision, Audio, and Text via ViT-style encoders with **Dynamic Positional Interpolation** for variable input scales.
> *See `airborne_antara/perception.py`*

### 4. Autonomic Health (MoE-Aware)
**Result:** Self-healing neural substrate. **V9.3 Update**: The monitor is now **MoE-Aware**, surgically preserving dormant expert knowledge while rejuvenating truly dead neurons in active manifold paths.
> *See `airborne_antara/core.py`*
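
The MoE-aware rejuvenation step can be pictured with a small sketch. This is a hypothetical helper, not the library's actual API; it assumes per-neuron activation statistics are tracked elsewhere, and the `expert_mask` argument marking dormant-expert neurons is an illustrative invention:

```python
import torch
import torch.nn as nn

def rejuvenate_dead_neurons(linear, activation_means, expert_mask=None, eps=1e-6):
    """Re-initialize output neurons whose mean activation has collapsed to ~0.

    `activation_means` is a per-neuron running mean of post-ReLU activations.
    `expert_mask` (optional) marks neurons belonging to dormant experts,
    which are preserved even if currently silent.
    """
    dead = activation_means.abs() < eps
    if expert_mask is not None:
        dead = dead & ~expert_mask  # never touch dormant-expert neurons
    with torch.no_grad():
        fresh = torch.empty_like(linear.weight)
        nn.init.kaiming_uniform_(fresh, nonlinearity="relu")
        linear.weight[dead] = fresh[dead]  # surgical re-init of dead rows only
        if linear.bias is not None:
            linear.bias[dead] = 0.0
    return int(dead.sum())
```

The key design point is the mask intersection: a neuron is rejuvenated only if it is both statistically dead *and* outside any dormant-expert path.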

---

## 🧬 THE 4 PILLARS OF SENTIENCE

### 1. CONSCIOUSNESS V2 (Global Workspace)
**[Technical Deep Dive ↗](docs/technical/CONSCIOUSNESS_V2.md)**

Implements **System 2 Thinking**. Instead of a single forward pass, the model projects states into a recursive workspace to simulate "thinking about the problem."
*   **Thought Trace**: Internal hidden state evolution logged as "telemetry" for debugging.
*   **Recursive Workspace**: Dynamic number of internal reasoning loops based on task entropy.
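
The entropy-gated loop described above can be sketched as follows. This is a minimal illustration, not the library's implementation; the `step_fn` and `readout` callables are assumptions standing in for the workspace update and output head:

```python
import torch
import torch.nn.functional as F

def recursive_workspace(h, step_fn, readout, max_steps=8, entropy_thresh=0.5):
    """Iterate an internal update until the readout distribution is confident.

    `step_fn` refines the hidden state; `readout` maps it to logits.
    The number of loops is dynamic: high predictive entropy keeps "thinking".
    Returns the final hidden state and the logged thought trace.
    """
    trace = [h]  # thought trace: hidden-state evolution, usable as telemetry
    for _ in range(max_steps):
        logits = readout(h)
        p = F.softmax(logits, dim=-1)
        entropy = -(p * torch.log(p + 1e-9)).sum(-1).mean()
        if entropy < entropy_thresh:
            break  # confident enough: stop deliberating
        h = step_fn(h)
        trace.append(h)
    return h, trace
```

Uncertain (high-entropy) inputs consume more reasoning loops, while easy inputs exit after a single pass, which is the essence of System 2 deliberation.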

### 2. HOLOGRAPHIC MEMORY (Unified Handler)
**[Technical Deep Dive ↗](docs/technical/MEMORY_CONSOLIDATION.md)**

Combines Elastic Weight Consolidation (EWC), Synaptic Intelligence (SI), and Orthogonal Gradient Descent (OGD).
*   **Saliency Pooling**: Dynamically prioritizes historical parameters to prevent erasure.
*   **Experience Replay**: Generative replay of "dreams" during idle cycles to consolidate learning.
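
The EWC component of the unified handler reduces to a quadratic pull toward past-task parameters, weighted by Fisher importance. A minimal sketch of that penalty (illustrative only; the actual handler also folds in SI path integrals and OGD projections):

```python
import torch

def ewc_penalty(params, anchor_params, fisher, lam=1.0):
    """Elastic Weight Consolidation penalty.

    L_total = L_task + (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2

    `anchor_params` are the weights that solved previous tasks, and `fisher`
    holds the per-parameter Fisher information estimating their importance.
    """
    loss = torch.zeros(())
    for p, p_star, f in zip(params, anchor_params, fisher):
        loss = loss + (f * (p - p_star) ** 2).sum()
    return 0.5 * lam * loss
```

High-Fisher (salient) parameters are expensive to move, which is what prevents erasure of consolidated knowledge.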

### 3. MULTI-MODAL PERCEPTION GATEWAY
**[Technical Deep Dive ↗](docs/technical/SYNTHETIC_INTUITION.md)**

Unified manifold for Vision (Transformers), Audio (Spectral-Temporal), and Text.
*   **Positional Interpolation**: Scalable attention windows for high-resolution vision.
*   **Modality Fusion**: Cross-modal attention tokens for joint reasoning.
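
Cross-modal fusion of the kind described above can be sketched with standard cross-attention. This is a generic illustration under assumed dimensions, not the gateway's actual module:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Fuse two modality token streams with cross-attention: the vision
    tokens act as queries over the text tokens, so each visual patch
    attends to the language context before joint reasoning.
    """
    def __init__(self, dim=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vision_tokens, text_tokens):
        # query = vision, key/value = text
        fused, _ = self.attn(vision_tokens, text_tokens, text_tokens)
        return self.norm(vision_tokens + fused)  # residual + norm
```

The two streams may have different sequence lengths; only the embedding dimension must match.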

### 4. AUTONOMIC HEALTH MONITOR
**[Technical Deep Dive ↗](docs/technical/INTROSPECTION_MATHEMATICS.md)**

A background daemon tracking the "Neural Health" of the host model.
*   **Neural Shivering**: Injecting controlled stochastic noise to prevent saturation.
*   **Gradient Centralization**: Modern optimization to stabilize deep manifold learning.
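
Gradient Centralization (Yong et al., 2020) is simple to state: subtract the per-filter mean from each weight gradient before the optimizer step. A minimal sketch (hypothetical helper name, not the monitor's API):

```python
import torch
import torch.nn as nn

def centralize_gradients(model):
    """Subtract the mean over non-output dimensions from each weight
    gradient. Applied only to tensors of rank > 1 (weight matrices and
    conv kernels); biases and norms are left untouched.
    """
    for p in model.parameters():
        if p.grad is not None and p.grad.dim() > 1:
            dims = tuple(range(1, p.grad.dim()))
            p.grad -= p.grad.mean(dim=dims, keepdim=True)
```

Called between `loss.backward()` and `optimizer.step()`, this constrains each filter's gradient to have zero mean, which empirically stabilizes deep-network training.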

---

## 🧪 RESEARCH / EXPERIMENTAL (V9.2)

> [!CAUTION]
> These features are in preview for the NeurIPS ablation suite and may exhibit instability in production.

*   **Self-Awareness V2**: Metacognitive engine calculating "Confidence" and "Competence" in real-time.
*   **I-JEPA World Model**: Predictive foresight for world-dynamic modeling.
*   **Holographic Compression**: Next-gen memory storage with $O(\log N)$ retrieval complexity.

---

## ⚡ INTEGRATION PROTOCOL

The architecture is designed for "One-Line Cognitive Injection":

```python
import torch
import torch.nn as nn
from airborne_antara import AdaptiveFramework, PRESETS

# 1. DEFINE YOUR PYTORCH MODEL (Transformer, CNN, etc.)
class MySubstrate(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

    def forward(self, x):
        return self.net(x)

model = MySubstrate()

# 2. INJECT SENTIENT LAYER
# Uses the 'production' preset: Consciousness V2 + Unified Memory + MoE
agent = AdaptiveFramework(model, PRESETS.production())

# 3. CONSCIOUS TRAINING LOOP
# The agent handles Mixed Precision (AMP), Memory Consolidation, and Thought Tracing
for inputs, targets in dataloader:
    metrics = agent.train_step(inputs, target_data=targets)
    
    print(f"Loss: {metrics['loss']:.4f} | Surprise: {metrics['surprise']:.4f}")
    print(f"Cognitive Mode: {metrics['mode']}") # [NORMAL, NOVELTY, PANIC]
```

---

## 🖥️ TELEMETRY INTERFACE

Visualize the internal state (Surprise, Memory Adjacency, Expert Utilization) via the CLI dashboard:

```bash
python -m airborne_antara --demo
```


---

## 📂 RESEARCH DOCUMENTATION

* [**INITIATION** (Getting Started)](docs/guides/GETTING_STARTED.md)
* [**ARCHITECTURE SPECIFICATIONS**](docs/technical/SYNTHETIC_INTUITION.md)
* [**API REFERENCE**](docs/guides/API.md)

---

<div align="center">
<b>LEAD ARCHITECT: SURYAANSH PRITHVIJIT SINGH</b><br>
<i>V8.0 "Sentient" Release // 2026</i>
</div>
