Metadata-Version: 2.4
Name: omg-hybridomga
Version: 1.0.1
Summary: OMG-hybridOMGa — Ultimate Hybrid LoRA Suite: LoRA, DoRA, QLoRA, LoRA+, rsLoRA, OMGa, and a full training engine
License: Apache-2.0
Project-URL: Homepage, https://github.com/your-username/omg-hybridomga
Project-URL: Repository, https://github.com/your-username/omg-hybridomga
Project-URL: Issues, https://github.com/your-username/omg-hybridomga/issues
Keywords: lora,dora,qlora,peft,fine-tuning,llm,transformers,pytorch,omga,low-rank-adaptation,hybrid-lora
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=2.1.0
Requires-Dist: transformers>=4.40.0
Requires-Dist: accelerate>=0.27.0
Provides-Extra: full
Requires-Dist: bitsandbytes>=0.43.0; extra == "full"
Requires-Dist: peft>=0.10.0; extra == "full"
Requires-Dist: trl>=0.8.0; extra == "full"
Requires-Dist: datasets>=2.18.0; extra == "full"
Provides-Extra: triton
Requires-Dist: triton>=2.3.0; extra == "triton"
Provides-Extra: flash
Requires-Dist: flash-attn>=2.5.0; extra == "flash"
Provides-Extra: deepspeed
Requires-Dist: deepspeed>=0.14.0; extra == "deepspeed"
Provides-Extra: optuna
Requires-Dist: optuna>=3.6.0; extra == "optuna"
Provides-Extra: all
Requires-Dist: bitsandbytes>=0.43.0; extra == "all"
Requires-Dist: peft>=0.10.0; extra == "all"
Requires-Dist: trl>=0.8.0; extra == "all"
Requires-Dist: datasets>=2.18.0; extra == "all"
Requires-Dist: triton>=2.3.0; extra == "all"
Requires-Dist: deepspeed>=0.14.0; extra == "all"
Requires-Dist: optuna>=3.6.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: pytest-cov>=5.0; extra == "dev"
Requires-Dist: black>=24.0; extra == "dev"
Requires-Dist: ruff>=0.4.0; extra == "dev"
Requires-Dist: mypy>=1.10; extra == "dev"
Requires-Dist: twine>=5.0; extra == "dev"
Requires-Dist: build>=1.2; extra == "dev"
Dynamic: license-file

# OMG-hybridOMGa ⚡🧠🔮♻️

**Ultimate Hybrid LoRA Suite** — LoRA, DoRA, QLoRA, LoRA+, rsLoRA, and OMGa, combined with a full training engine.

[![PyPI version](https://badge.fury.io/py/omg-hybridomga.svg)](https://pypi.org/project/omg-hybridomga/)
[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
[![PyTorch 2.1+](https://img.shields.io/badge/PyTorch-2.1+-red.svg)](https://pytorch.org/)

---

## Installation

```bash
# Base install (torch + transformers + accelerate only)
pip install omg-hybridomga

# Full install (includes bitsandbytes, peft, trl, datasets)
pip install "omg-hybridomga[full]"

# Everything included
pip install "omg-hybridomga[all]"
```

---

## Features

### LoRA Layers
| Method | Description | Reference |
|--------|-------------|-----------|
| **LoRA** | Standard Low-Rank Adaptation | Hu et al., 2021 |
| **DoRA** | Weight-Decomposed LoRA | Liu et al., 2024 |
| **QLoRA** | 4/8-bit quantized base weights | Dettmers et al., 2023 |
| **LoRA+** | Separate LRs for A and B matrices | Hayou et al., 2024 |
| **rsLoRA** | Rank-Stabilized LoRA | Kalajdzievski, 2023 |
| **OMGa ★** | OMG Adaptive LoRA — per-token gate, dual-rank | — |
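The low-rank update all of these methods build on is small enough to sketch directly. Standard LoRA augments a frozen weight `W` with a scaled product of two thin trainable matrices, `y = W x + (alpha / r) · B A x`, with `B` zero-initialized so training starts from the base model's exact behavior. A minimal NumPy illustration (conceptual only, not this package's internals):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 32, 8, 16

W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

def lora_forward(x):
    # base path plus the scaled low-rank update
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# with B = 0 the adapted layer matches the frozen base exactly
assert np.allclose(lora_forward(x), W @ x)
```

rsLoRA changes only the scale: `alpha / r` becomes `alpha / sqrt(r)` (Kalajdzievski, 2023), which keeps update magnitudes stable as the rank grows; the rest of the computation is unchanged.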

### Memory & VRAM Management
- `MemoryMonitor` — daemon-thread VRAM watchdog
- `VRAMGuard` — automatic batch-size reduction + smart recovery
- `MorphicMemory™` — Markov prediction + tensor reincarnation
- 2-bit NF2 quantization (no bitsandbytes required)
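`MemoryMonitor`'s actual interface is internal to the package; as a hedged sketch of the daemon-thread watchdog idea, a minimal monitor can poll a memory reader (e.g. `torch.cuda.memory_allocated` on GPU) and fire a callback when a limit is crossed. All names below are hypothetical:

```python
import threading
import time

class SimpleMemoryMonitor:
    """Minimal daemon-thread memory watchdog (illustrative, not the package API).

    `read_used` returns current memory use in bytes; it is injected as a
    callable so the sketch stays testable on CPU-only machines.
    """

    def __init__(self, read_used, limit_bytes, on_pressure, interval=0.05):
        self.read_used = read_used
        self.limit = limit_bytes
        self.on_pressure = on_pressure
        self.interval = interval
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def _loop(self):
        while not self._stop.is_set():
            if self.read_used() > self.limit:
                self.on_pressure()  # e.g. empty the cache, shrink the batch
            time.sleep(self.interval)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

# demo: simulate memory pressure with a fake reader
events = []
usage = {"bytes": 0}
mon = SimpleMemoryMonitor(lambda: usage["bytes"], limit_bytes=100,
                          on_pressure=lambda: events.append("pressure"),
                          interval=0.01)
mon.start()
usage["bytes"] = 200   # cross the limit
time.sleep(0.1)
mon.stop()
```

A real watchdog would additionally debounce repeated firings and coordinate with the training loop before shrinking the batch size.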

### Speed & Compilation
- `Accelerator` — gradient accumulation, AMP, gradient clipping, fused optimizers
- `CrystalCore™` — runtime kernel crystallization
- `TritonKernels` — Triton fallbacks for RMSNorm and SwiGLU
- `torch.compile` — fullgraph + cache support

### Optimizers & Schedulers
- `EMA` / `SWA` — Exponential / Stochastic Weight Averaging
- `SpectraOptimizer™` — frequency-domain adaptive optimizer
- `ResonanceScheduler™` — gradient-spectrum self-tuning learning rate
- `WarmupCosineScheduler` / `WarmupLinearScheduler`
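The core of EMA is a one-line update rule: keep a shadow copy of each parameter and blend it toward the live value after every optimizer step, `shadow ← decay · shadow + (1 − decay) · param`. A scalar sketch (the package's class will operate on tensors and handle state dicts; treat these names as illustrative):

```python
class SimpleEMA:
    """Shadow-parameter exponential moving average (illustrative sketch)."""

    def __init__(self, params, decay=0.999):
        self.decay = decay
        self.shadow = [float(p) for p in params]  # detached copies

    def update(self, params):
        # shadow <- decay * shadow + (1 - decay) * param
        for i, p in enumerate(params):
            self.shadow[i] = self.decay * self.shadow[i] + (1 - self.decay) * float(p)

ema = SimpleEMA([1.0, 2.0], decay=0.9)
ema.update([2.0, 4.0])
# shadow is now [0.9*1.0 + 0.1*2.0, 0.9*2.0 + 0.1*4.0] = [1.1, 2.2]
```

At evaluation time the shadow values are swapped into the model, which typically yields a smoother checkpoint than the raw final weights.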

### Gradient Refinement
- `GradientHarmonics™` — wavelet-based gradient processing
- `NeuralProfiler™` — LSTM-based OOM/explosion prediction
- `LossSpikeDetector` — spike detection + learning-rate intervention
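A rolling-window heuristic is a common way to build such a detector: flag any step whose loss exceeds the recent mean by a few standard deviations, and let the caller react (for example by halving the learning rate). The sketch below is illustrative; the package's actual heuristic may differ:

```python
from collections import deque
import statistics

class SimpleSpikeDetector:
    """Rolling-window loss-spike detector (illustrative sketch)."""

    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def step(self, loss):
        spiked = False
        if len(self.history) >= 5:  # need a few samples for stable statistics
            mean = statistics.fmean(self.history)
            std = statistics.pstdev(self.history)
            if std > 0 and loss > mean + self.threshold * std:
                spiked = True  # caller can then e.g. halve the LR
        self.history.append(loss)
        return spiked

det = SimpleSpikeDetector()
for loss in [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]:
    assert not det.step(loss)   # normal fluctuation
assert det.step(10.0)           # clear outlier flags a spike
```

Keeping the window short makes the detector adapt quickly after a genuine regime change (e.g. a learning-rate warmup ending) instead of flagging every step afterward.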

---

## Quick Start

### LoRA Layer Only

```python
from omg_hybridomga import HybridConfig, apply_hybrid_lora

cfg = HybridConfig(method="omga", rank=16, alpha=32)
model = apply_hybrid_lora(model, cfg)
```

### Full Training Engine

```python
from omg_hybridomga import HybridOMGa

engine = HybridOMGa(
    "meta-llama/Llama-3.2-3B",
    rank=16,
    method="omga",
    ema=True,
    swa=True,
    crystal_core=True,
    priority_prop=True,
)

model, tokenizer = engine.load()
trainer = engine.get_trainer(dataset)
engine.train(trainer)
```

### Save / Load LoRA

```python
from omg_hybridomga import save_hybrid_lora, load_hybrid_lora

save_hybrid_lora(model, "./my_lora_weights")
model = load_hybrid_lora(model, "./my_lora_weights")
```

### Environment Check

```python
from omg_hybridomga import check_environment
check_environment()
```

---

## Optional Dependencies

| Group | Install | Contents |
|-------|---------|----------|
| `full` | `pip install "omg-hybridomga[full]"` | bitsandbytes, peft, trl, datasets |
| `triton` | `pip install "omg-hybridomga[triton]"` | Triton kernel support |
| `flash` | `pip install "omg-hybridomga[flash]"` | Flash Attention 2 |
| `deepspeed` | `pip install "omg-hybridomga[deepspeed]"` | DeepSpeed ZeRO |
| `optuna` | `pip install "omg-hybridomga[optuna]"` | Optuna hyperparameter search |
| `all` | `pip install "omg-hybridomga[all]"` | Everything above |

---

## Requirements

- Python ≥ 3.9
- PyTorch ≥ 2.1.0
- transformers ≥ 4.40.0
- accelerate ≥ 0.27.0

---

## License

Apache 2.0 — see the [LICENSE](LICENSE) file for details.
