Metadata-Version: 2.4
Name: nexuslora
Version: 1.0.0
Summary: NexusLoRA — Unified LLM Fine-Tuning Engine (FastLoRA × NexusTrain)
Author-email: Ömür Bera Işık <fastloraoffical@gmail.com>
Project-URL: Homepage, https://github.com/fastloraoffical/nexuslora
Project-URL: Repository, https://github.com/fastloraoffical/nexuslora
Project-URL: Issues, https://github.com/fastloraoffical/nexuslora/issues
Keywords: llm,lora,fine-tuning,transformers,peft,qlora,nexus,training
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: torch>=2.1.0
Requires-Dist: transformers>=4.40.0
Requires-Dist: accelerate>=0.27.0
Provides-Extra: full
Requires-Dist: torch>=2.1.0; extra == "full"
Requires-Dist: transformers>=4.40.0; extra == "full"
Requires-Dist: accelerate>=0.27.0; extra == "full"
Requires-Dist: bitsandbytes>=0.43.0; extra == "full"
Requires-Dist: peft>=0.10.0; extra == "full"
Requires-Dist: trl>=0.8.0; extra == "full"
Requires-Dist: datasets>=2.18.0; extra == "full"
Requires-Dist: triton>=2.1.0; extra == "full"
Requires-Dist: deepspeed>=0.14.0; extra == "full"
Requires-Dist: optuna>=3.5.0; extra == "full"
Requires-Dist: flash-attn>=2.5.0; extra == "full"
Provides-Extra: train
Requires-Dist: bitsandbytes>=0.43.0; extra == "train"
Requires-Dist: peft>=0.10.0; extra == "train"
Requires-Dist: trl>=0.8.0; extra == "train"
Requires-Dist: datasets>=2.18.0; extra == "train"
Requires-Dist: optuna>=3.5.0; extra == "train"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: black; extra == "dev"
Requires-Dist: ruff; extra == "dev"
Requires-Dist: mypy; extra == "dev"

# NexusLoRA ⚡🛡️🧠🔮

**Unified LLM Fine-Tuning Engine** — FastLoRA v4 and NexusTrain v1 merged into a single file.

---

## Installation

```bash
# Base install
pip install nexuslora

# With training tools (recommended)
pip install "nexuslora[train]"

# Full install (includes flash-attn)
pip install "nexuslora[full]"

# Developer install
git clone https://github.com/fastloraoffical/nexuslora
cd nexuslora
pip install -e ".[dev]"
```
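
To verify the install, a quick import smoke test (this assumes the import name matches the distribution name, as the Quick Start below uses):

```python
# Minimal smoke test: NexusLoRA is the entry point used throughout this
# README; if this import fails, the install did not complete.
from nexuslora import NexusLoRA
print(NexusLoRA)
```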

---

## Quick Start

```python
from nexuslora import NexusLoRA

# Full-featured — all NexusLoRA modules active
nl = NexusLoRA("meta-llama/Llama-3.2-3B",
               nexus_enabled=True, nexus_power=1.0)
model, tokenizer = nl.load()
trainer = nl.get_trainer(train_dataset)
nl.train(trainer)
```
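
This and the following examples assume a `train_dataset` prepared in advance. A minimal sketch using the `datasets` library (the dataset name is only an illustration; the exact format `get_trainer` expects is not pinned down here):

```python
from datasets import load_dataset

# Hypothetical example dataset; substitute your own. The object is
# passed straight to get_trainer() in the surrounding examples.
train_dataset = load_dataset("yahma/alpaca-cleaned", split="train")
```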

### Plain LoRA only (NexusTrain modules disabled)

```python
nl = NexusLoRA("meta-llama/Llama-3.2-3B", nexus_enabled=False)
model, tokenizer = nl.load()
trainer = nl.get_trainer(train_dataset)
nl.train(trainer)
```

### Selective module activation

```python
nl = NexusLoRA(
    "meta-llama/Llama-3.2-3B",
    nexus_enabled=True,
    nexus_power=1.0,
    # Conflict avoidance:
    torch_compile=False,             # CrystalCore™ will be active
    nexus_crystal_core=True,
    mixed_precision_optimize=False,  # ChromaticPrecision™ will be active
    nexus_chromatic_precision=True,
    lr_scheduler="constant",         # ResonanceScheduler™ will be active
)
```

---

## Features

### FastLoRA Core
| Feature | Description |
|---|---|
| Custom Triton Kernels | Fused RMSNorm, SwiGLU, and RoPE implementations (see the sketch after this table) |
| 2-bit / 4-bit / 8-bit Quant | Includes a 2-bit path independent of bitsandbytes |
| Mixture of Experts | Sparse MoE router + automatic patching |
| CPU/NVMe Offloading | ZeRO-Infinity-style memory management |
| OOM Recovery | Resumes from OOM without interrupting training |
| Optuna AutoTune | Automatic tuning of lr / rank / batch size |
| UnstoppableTrainer | Automatic recovery from any error |
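
NexusLoRA's own kernels are not reproduced in this README; as a rough illustration of what a fused Triton RMSNorm looks like, here is a minimal one-row-per-program sketch (the standard Triton tutorial pattern, not the package's actual implementation):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def rmsnorm_kernel(x_ptr, w_ptr, out_ptr, n_cols, eps, BLOCK_SIZE: tl.constexpr):
    # Each program instance normalizes one row of the input.
    row = tl.program_id(0)
    offs = tl.arange(0, BLOCK_SIZE)
    mask = offs < n_cols
    x = tl.load(x_ptr + row * n_cols + offs, mask=mask, other=0.0).to(tl.float32)
    # Root-mean-square statistic for the row, accumulated in fp32.
    rms = tl.sqrt(tl.sum(x * x, axis=0) / n_cols + eps)
    w = tl.load(w_ptr + offs, mask=mask, other=0.0).to(tl.float32)
    tl.store(out_ptr + row * n_cols + offs, x / rms * w, mask=mask)

def rmsnorm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    assert x.ndim == 2 and x.is_contiguous()
    out = torch.empty_like(x)
    n_rows, n_cols = x.shape
    BLOCK_SIZE = triton.next_power_of_2(n_cols)
    rmsnorm_kernel[(n_rows,)](x, weight, out, n_cols, eps, BLOCK_SIZE=BLOCK_SIZE)
    return out
```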

### NexusTrain Modules
| Module | Description |
|---|---|
| CrystalCore™ | Runtime kernel crystallization |
| MorphicMemory™ | Markov-predicted tensor reuse |
| SpectraOptimizer™ | FFT-based, beyond-AdamW optimizer |
| ResonanceScheduler™ | Self-tuning LR from the gradient spectrum |
| ChromaticPrecision™ | Dynamic per-layer dtype assignment (see the sketch below) |
| GradientHarmonics™ | Wavelet-based gradient processing |
| NeuralProfiler™ | LSTM-based OOM/explosion prediction |
| CrystalPipeline™ | Dynamic grad-accum + async checkpointing |
| ZeroWaste™ | Dead-parameter elimination |
| UniversalAdapter™ | Automatic patching of HF/TRL/PEFT/DeepSpeed |
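
Module internals are not documented here; as a conceptual illustration of dynamic per-layer dtype assignment (the idea behind ChromaticPrecision™, not its implementation), a plain-PyTorch sketch:

```python
import torch
import torch.nn as nn

def assign_layer_dtypes(model: nn.Module) -> nn.Module:
    # Illustrative static heuristic only: keep numerically sensitive layers
    # in fp32 and cast matmul-heavy layers to bf16. A dynamic scheme would
    # adjust these choices per layer during training.
    for module in model.modules():
        if isinstance(module, (nn.LayerNorm, nn.Embedding)):
            module.to(torch.float32)
        elif isinstance(module, nn.Linear):
            module.to(torch.bfloat16)
    return model
```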

---

## Conflict Warnings

| FastLoRA | NexusTrain | Recommendation |
|---|---|---|
| `torch_compile=True` | `nexus_crystal_core=True` | Disable one of the two |
| `paged_optimizer=True` | `nexus_spectra_optimizer=True` | On low VRAM, disable spectra |
| `mixed_precision_optimize=True` | `nexus_chromatic_precision=True` | Disable one of the two |
| `dynamic_batch_scaling=True` | `nexus_crystal_pipeline=True` | Disable one of the two |
| `smart_checkpoint=True` | `nexus_crystal_pipeline=True` | Disable one of the two |
| `loss_spike_detection=True` | `nexus_neural_profiler=True` | Disable one of the two |
| `gradient_noise_monitor=True` | `nexus_gradient_harmonics=True` | The monitor becomes misleading |
| `mem_defrag=True` | `nexus_morphic_memory=True` | Harmless; increase the defrag interval 2x |
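
These pairs can also be guarded programmatically before constructing `NexusLoRA`. A hypothetical helper (not part of the nexuslora API) that rejects the hard conflicts from the table above:

```python
# Hypothetical helper (not part of the nexuslora API).
CONFLICTS = [
    ("torch_compile", "nexus_crystal_core"),
    ("mixed_precision_optimize", "nexus_chromatic_precision"),
    ("dynamic_batch_scaling", "nexus_crystal_pipeline"),
    ("smart_checkpoint", "nexus_crystal_pipeline"),
    ("loss_spike_detection", "nexus_neural_profiler"),
]

def check_conflicts(**kwargs) -> None:
    # Fail fast at configuration time instead of mid-run.
    for a, b in CONFLICTS:
        if kwargs.get(a) and kwargs.get(b):
            raise ValueError(f"Conflicting options: {a} and {b}; disable one.")

check_conflicts(torch_compile=False, nexus_crystal_core=True)  # passes
```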

---

## API Reference

```python
nl = NexusLoRA(model_name, **kwargs)   # Configuration
model, tokenizer = nl.load()           # Load model + apply all optimizations
trainer = nl.get_trainer(dataset)      # Build trainer
nl.train(trainer)                      # Train
nl.save("./output")                    # Save
nl.push_to_hub("username/model")       # HuggingFace Hub
nl.merge_and_unload()                  # LoRA merge
response = nl.generate("prompt")       # Inference
nl.nexus_async_checkpoint()            # Async checkpoint
nl.profile()                           # Benchmark
nl.stop()                              # Clean shutdown
```
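
Putting these calls together, a typical end-to-end run (reusing the `train_dataset` sketched in the Quick Start; the call order here is an assumption) might look like:

```python
from nexuslora import NexusLoRA

nl = NexusLoRA("meta-llama/Llama-3.2-3B", nexus_enabled=True)
model, tokenizer = nl.load()
trainer = nl.get_trainer(train_dataset)
nl.train(trainer)
nl.save("./output")                     # adapter checkpoint
nl.merge_and_unload()                   # bake the LoRA weights into the base model
print(nl.generate("Explain LoRA in one sentence."))
nl.stop()
```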

---

## Requirements

- Python ≥ 3.9
- PyTorch ≥ 2.1.0
- Transformers ≥ 4.40.0
- Accelerate ≥ 0.27.0
