Metadata-Version: 2.4
Name: torelora
Version: 1.1.1
Summary: TOREloRA v1.1.1 — LoRA fine-tuning + ASGP v3 · CORAP20 · SSYP · OMNISYNC+ · BBGH AGI · ZeroFault v2
Home-page: https://github.com/toretekno/torelora
Author: TORE TEKNOLOJİ & ARAŞTIRMA
Author-email: Ömür Bera Işık <info@toretekno.com>
License: MIT
Project-URL: Homepage, https://github.com/toretekno/torelora
Project-URL: Repository, https://github.com/toretekno/torelora
Project-URL: Bug Tracker, https://github.com/toretekno/torelora/issues
Keywords: lora,fine-tuning,llm,transformers,peft,quantization,training,deep-learning,pytorch,torelora,asgp,corap20,ssyp,omnisync,bbgh
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=2.1.0
Requires-Dist: transformers>=4.40.0
Requires-Dist: accelerate>=0.27.0
Requires-Dist: peft>=0.10.0
Requires-Dist: trl>=0.8.0
Requires-Dist: datasets>=2.18.0
Requires-Dist: safetensors>=0.4.0
Requires-Dist: numpy>=1.24.0
Requires-Dist: psutil>=5.9.0
Provides-Extra: full
Requires-Dist: bitsandbytes>=0.43.0; extra == "full"
Requires-Dist: sentencepiece>=0.1.99; extra == "full"
Requires-Dist: triton; extra == "full"
Requires-Dist: flash-attn; extra == "full"
Requires-Dist: deepspeed; extra == "full"
Requires-Dist: optuna; extra == "full"
Requires-Dist: wandb; extra == "full"
Requires-Dist: tensorboard; extra == "full"
Requires-Dist: matplotlib; extra == "full"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Requires-Dist: black; extra == "dev"
Requires-Dist: isort; extra == "dev"
Requires-Dist: mypy; extra == "dev"
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-python

# TOREloRA v1.1.1 — "THE REAL AGI" 🌐⚡🧠

**TORE TEKNOLOJİ & ARAŞTIRMA — MIT License**  
*Ömür Bera Işık & the TORE Teknoloji Team*

---

## Installation

```bash
# Minimal (basic training)
pip install torelora

# Full install (4-bit, Flash Attention, WandB, DeepSpeed…)
pip install "torelora[full]"
```

---

## Simplest Usage

```python
from torelora import TOREloRA

tl = TOREloRA("mistralai/Mistral-7B-v0.3")
model, tok = tl.load()
trainer = tl.get_trainer(dataset)
tl.train(trainer)
```

---

## SimpleConfig — the c/true · c/false System

```python
from torelora import SimpleConfig, TOREloRA

cfg = SimpleConfig.parse("""
    model_name        meta-llama/Llama-3.2-1B
    ssyp              c/true
    omnisync          c/true
    bbgh              c/false
    corap20           c/false
    lora_r            32
    batch_size        4
    epochs            3
    flash_attention   c/true
""")

tl = TOREloRA(cfg)
model, tok = tl.load()
trainer = tl.get_trainer(my_dataset)
tl.train(trainer)
```
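
The `c/true` / `c/false` notation is a plain whitespace-separated key-value format. As a rough illustration of how such a block can be parsed — the `parse_simple` helper below is hypothetical, not TOREloRA's actual implementation:

```python
def parse_simple(text: str) -> dict:
    """Parse whitespace-separated key/value lines.

    'c/true' / 'c/false' become booleans, numeric strings become
    int or float, anything else stays a string.
    """
    cfg = {}
    for raw in text.strip().splitlines():
        parts = raw.split(maxsplit=1)
        if len(parts) != 2:
            continue  # skip blank or incomplete lines
        key, value = parts[0], parts[1].strip()
        if value == "c/true":
            cfg[key] = True
        elif value == "c/false":
            cfg[key] = False
        else:
            for cast in (int, float):
                try:
                    cfg[key] = cast(value)
                    break
                except ValueError:
                    continue
            else:
                cfg[key] = value  # plain string
    return cfg

cfg = parse_simple("""
    model_name  meta-llama/Llama-3.2-1B
    ssyp        c/true
    lora_r      32
""")
# cfg == {"model_name": "meta-llama/Llama-3.2-1B", "ssyp": True, "lora_r": 32}
```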

---

## Technology Components

| Component | Description |
|-----------|-------------|
| 🌊 **SSYP** | Simultaneous Synchronized Yield Pipeline — all tokens in a single packet, zero CPU overhead |
| 🔮 **CORAP20** | Hybrid 2-bit dynamic information ecosystem — 42–75% memory savings |
| ⚡ **ASGP v3** | Async Stream Grad Pipe — distinct noise per parameter, INT8 ring buffer |
| 🧬 **ParamMesh** | Live information-exchange network between parameters |
| 🛡️ **ZeroFault v2** | Automatic gradient repair, spike smoothing, unshakeable stability |
| 🌐 **OMNISYNC+** | Geometric resonance · holographic data folding · asynchronous singularity |
| 🧠 **BBGH AGI** | Cognitive silence reporting · synthetic reasoning · one-shot learning |
| 🔬 **T4TrillionEngine** | Trillion-parameter models on a T4 (16 GB), LRU layer cache |
---

## All Parameters with 20+ Aliases

```python
tl = TOREloRA(
    model="llama3",           # or: model_name, checkpoint, base…
    rank=32,                  # or: lora_r, r, adapter_rank…
    lr=3e-4,                  # or: learning_rate, step_size…
    batch=4,                  # or: batch_size, bs, train_batch…
    epochs=3,                 # or: num_epochs, passes…
    sgp=True,                 # ASGP v2/v3
    ssyp=True,                # Zero-overhead pipeline
    corap20=True,             # Hybrid 2-bit quantization
    omnisync=True,            # Geometric resonance alignment
    bbgh=True,                # One-shot memory engine
    agi_core=True,            # enables bbgh + omnisync
    flash_attention=True,
    quantization="4bit",
)
```
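
Accepting that many names per parameter amounts to normalizing every alias to one canonical key before the config is built. A hypothetical sketch of the idea — the `ALIASES` table and `normalize` helper below are illustrative, not TOREloRA's real internals:

```python
# Illustrative alias table: many accepted names map to one canonical key.
ALIASES = {
    "model": "model_name", "checkpoint": "model_name", "base": "model_name",
    "rank": "lora_r", "r": "lora_r", "adapter_rank": "lora_r",
    "lr": "learning_rate", "step_size": "learning_rate",
    "batch": "batch_size", "bs": "batch_size", "train_batch": "batch_size",
    "epochs": "num_epochs", "passes": "num_epochs",
}

def normalize(kwargs: dict) -> dict:
    """Rewrite every aliased keyword to its canonical name;
    unknown keys pass through unchanged."""
    return {ALIASES.get(k, k): v for k, v in kwargs.items()}

normalize({"model": "llama3", "rank": 32, "lr": 3e-4})
# {"model_name": "llama3", "lora_r": 32, "learning_rate": 0.0003}
```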

---

## Using the Components Directly

### ASGP v2 — Gradient Noise Pipe

```python
from torelora import ASGPv2
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
pipe  = ASGPv2(pipe_mb=512., noise=0.005, dist="normal",
               int8=True, async_bg=True, unique_per_param=True)
pipe.register_model(model)

# In the training loop:
pipe.inject(model)          # Inject noise
# … optimizer.step() …
pipe.adapt(loss.item())     # Update the adaptive noise
pipe.stop()
```

### SSYP — Zero-Overhead Pipeline

```python
from torelora import SSYP

ssyp    = SSYP(tokenizer, max_length=2048, async_tokenize=True)
dataset = ssyp.prepare(my_raw_dataset)

# Attach to the trainer automatically:
ssyp.patch_trainer(trainer, model)
```

### CORAP20 — Hybrid 2-Bit Quantization

```python
from torelora import CORAP20Engine
import torch.nn as nn

model  = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 256))
engine = CORAP20Engine(model, block_size=64, gkp_size_mb=4., holographic=True)
n      = engine.quantize_model()
stats  = engine.memory_stats()
print(f"{stats['orig_mb']:.1f}MB → {stats['corap_mb']:.1f}MB  ({stats['saving_pct']:.0f}% saved)")
```
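
The memory arithmetic behind blockwise low-bit quantization is easy to sanity-check. The sketch below is generic 2-bit blockwise quantization (per-block absmax scale plus 2-bit codes), not CORAP20's actual hybrid algorithm; real engines keep sensitive layers at higher precision, which is one reason quoted savings vary (e.g. the 42–75% range above).

```python
def two_bit_quantize(block):
    """Map a block of floats to 2-bit codes plus one absmax scale.
    Codes index the levels {-1, -1/3, +1/3, +1} * scale (illustrative)."""
    scale = max(abs(x) for x in block) or 1.0
    levels = (-1.0, -1 / 3, 1 / 3, 1.0)
    codes = [min(range(4), key=lambda i: abs(x / scale - levels[i]))
             for x in block]
    return scale, codes

def saving_pct(n_params, block_size=64):
    """Memory saved vs fp16: 2-bit codes + one fp32 scale per block."""
    orig_bits = n_params * 16
    quant_bits = n_params * 2 + (n_params // block_size) * 32
    return 100 * (1 - quant_bits / orig_bits)

saving_pct(1_000_000)  # ≈ 84.4 for a layer quantized entirely to 2-bit
```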

### BBGH — One-Shot Memory Engine

```python
from torelora import BBGHEngine

bbgh = BBGHEngine(embed_dim=512, max_cells=100_000)
bbgh.batch_learn(["Python was created in 1991.", "Artificial intelligence mimics human intelligence."])

hits   = bbgh.query("Who wrote Python?", top_k=3)
result = bbgh.generate("what is artificial intelligence")
print(hits[0]["value"])
```

### StreamingCORAP20Loader — For Giant Models

```python
from torelora import StreamingCORAP20Loader

loader  = StreamingCORAP20Loader("/path/to/70B-model", block_size=64, gkp_size_mb=256.)
store   = loader.convert_stream()    # Never fully materialized in memory
model, engine = loader.inject_into_model(model)
print(loader.stats)
```

---

## Command Line

```bash
# Environment check
torelora-check

# All demos
torelora

# Single-component demos
torelora --demo asgp
torelora --demo ssyp
torelora --demo corap20
torelora --demo omnisync
torelora --demo bbgh
torelora --demo config

# Version
torelora --version
```

---

## ZeroFault v2 — Automatic Fault Handling

```python
from torelora import ZeroFault

# Safe execution: returns default on error
result = ZeroFault.run(risky_function, arg1, arg2, default=None)

# Gradient spike repair
gnorm = ZeroFault.maybe_repair_gradients(model, prev_norm=prev_gnorm)

# Error summary
ZeroFault.summary()
ZeroFault.save("torelora_errors.json")
```
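
`ZeroFault.run` above follows the familiar "safe call with fallback" pattern. A pure-Python sketch of that pattern — the `safe_run` helper is illustrative, not ZeroFault's implementation:

```python
def safe_run(fn, *args, default=None, **kwargs):
    """Call fn(*args, **kwargs); on any exception, return default
    instead of propagating (mirrors the ZeroFault.run idea)."""
    try:
        return fn(*args, **kwargs)
    except Exception:
        return default

def risky_division(a, b):
    return a / b

safe_run(risky_division, 10, 2)             # 5.0
safe_run(risky_division, 10, 0, default=0)  # ZeroDivisionError → falls back to 0
```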

---

## License

MIT License — © TORE TEKNOLOJİ & ARAŞTIRMA, Ömür Bera Işık
