Metadata-Version: 2.4
Name: nexustrain
Version: 1.0.0
Summary: The world's first self-crystallizing neural training acceleration engine
Author-email: Ömür Bera Işık <fastloraoffical@gmail.com>
License: MIT License
        
        Copyright (c) 2025 Ömür Bera Işık
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
        
Project-URL: Homepage, https://github.com/fastloraoffical/nexustrain
Project-URL: Repository, https://github.com/fastloraoffical/nexustrain
Project-URL: Issues, https://github.com/fastloraoffical/nexustrain/issues
Project-URL: Changelog, https://github.com/fastloraoffical/nexustrain/blob/main/CHANGELOG.md
Keywords: deep-learning,machine-learning,training-acceleration,llm,fine-tuning,lora,pytorch,transformers,crystal-core,gradient-harmonics,spectra-optimizer
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Operating System :: OS Independent
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=2.1.0
Provides-Extra: full
Requires-Dist: transformers>=4.40.0; extra == "full"
Requires-Dist: accelerate>=0.27.0; extra == "full"
Requires-Dist: peft>=0.10.0; extra == "full"
Requires-Dist: trl>=0.8.0; extra == "full"
Requires-Dist: bitsandbytes>=0.43.0; extra == "full"
Requires-Dist: datasets>=2.14.0; extra == "full"
Provides-Extra: triton
Requires-Dist: triton>=2.1.0; extra == "triton"
Provides-Extra: speed
Requires-Dist: flash-attn>=2.5.0; extra == "speed"
Requires-Dist: triton>=2.1.0; extra == "speed"
Provides-Extra: distributed
Requires-Dist: deepspeed>=0.14.0; extra == "distributed"
Requires-Dist: accelerate>=0.27.0; extra == "distributed"
Provides-Extra: tuning
Requires-Dist: optuna>=3.5.0; extra == "tuning"
Provides-Extra: logging
Requires-Dist: wandb>=0.16.0; extra == "logging"
Requires-Dist: tensorboard>=2.14.0; extra == "logging"
Provides-Extra: all
Requires-Dist: nexustrain[distributed,full,logging,speed,triton,tuning]; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.4.0; extra == "dev"
Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: isort>=5.12.0; extra == "dev"
Requires-Dist: mypy>=1.5.0; extra == "dev"
Dynamic: license-file

# NexusTrain ⚡

**The World's First Self-Crystallizing Neural Training Engine**

> Author: **Ömür Bera Işık** | License: MIT | Python 3.9+

```bash
pip install nexustrain
pip install "nexustrain[full]"   # + HuggingFace ecosystem
pip install "nexustrain[all]"    # everything
```

---

## 10 Original Technologies

| Module | What it does |
|---|---|
| **CrystalCore™** | Automatically crystallizes frequently run operations at runtime (torch.compile) |
| **MorphicMemory™** | Predicts allocations with a Markov chain and resurrects dead tensors |
| **SpectraOptimizer™** | Applies an FFT to the gradient history and damps resonant frequencies |
| **ResonanceScheduler™** | Detects the training phase automatically from gradient-spectrum entropy |
| **ChromaticPrecision™** | Assigns each layer its own dtype (FP32/BF16/FP16) based on gradient variance |
| **GradientHarmonics™** | Decomposes gradients with Haar wavelets and injects noise in the frequency domain |
| **NeuralProfiler™** | Predicts OOM and gradient explosions before the step using a small LSTM |
| **CrystalPipeline™** | Adapts gradient accumulation dynamically to available VRAM; asynchronous checkpointing |
| **ZeroWaste™** | Detects wasted operations and parameters that receive zero gradient |
| **UniversalAdapter™** | Integrates automatically with HF/TRL/PEFT/DeepSpeed/Accelerate/vanilla PyTorch |
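
The GradientHarmonics row can be pictured with a one-level Haar split: pairwise averages form the low-frequency band, pairwise differences the high-frequency band, and noise touches only the latter. This is a minimal stdlib sketch of that idea, not the library's implementation; `haar_split` and its `noise` parameter are hypothetical names.

```python
import random

def haar_split(grad, noise=0.0):
    """One-level Haar decomposition of a gradient vector (even length).
    Hypothetical illustration of frequency-domain noise injection."""
    half = len(grad) // 2
    approx = [(grad[2 * i] + grad[2 * i + 1]) / 2 for i in range(half)]  # low band
    detail = [(grad[2 * i] - grad[2 * i + 1]) / 2 for i in range(half)]  # high band
    if noise:
        # Perturb only the high-frequency coefficients
        detail = [d + random.gauss(0.0, noise) for d in detail]
    # Inverse transform: a + d and a - d recover each original pair
    recon = []
    for a, d in zip(approx, detail):
        recon += [a + d, a - d]
    return recon
```

With `noise=0.0` the round trip is exact; a small nonzero `noise` perturbs only the detail band, leaving the gradient's coarse structure intact.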

---

## Quick Start

### With HuggingFace / TRL

```python
from nexustrain import NexusTrain, NexusConfig
from trl import SFTTrainer, SFTConfig

# Load the model
from transformers import AutoModelForCausalLM, AutoTokenizer
model     = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")

# Configure NexusTrain
cfg = NexusConfig(power=1.0)
nt  = NexusTrain(model, tokenizer, cfg)

# Create the trainer and patch it
trainer = SFTTrainer(model=model, tokenizer=tokenizer,
                     train_dataset=dataset, args=SFTConfig(...))
trainer = nt.patch_trainer(trainer)
nt.engage()

trainer.train()
nt.summary()
```

### One-line wrapper

```python
from nexustrain import nexus_wrap

nt, model = nexus_wrap(model, tokenizer, power=1.0)
trainer   = nt.patch_trainer(trainer)
nt.engage()
```

### SpectraOptimizer only

```python
from nexustrain import SpectraOptimizer

optimizer = SpectraOptimizer(
    model.parameters(),
    lr               = 2e-4,
    spectral_damping = 0.1,
)
```
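
One way to picture "spectral damping": keep a short history of a gradient value, take its discrete Fourier transform, shrink the dominant non-DC bin (and its conjugate, to keep the signal real), and transform back. The stdlib sketch below illustrates that reading; `damp_resonant` is a hypothetical helper, not the optimizer's internals.

```python
import cmath

def damp_resonant(history, damping=0.1):
    """Shrink the dominant oscillation in a scalar gradient history.
    Naive O(n^2) DFT; purely illustrative, not NexusTrain code."""
    n = len(history)
    spectrum = [sum(history[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    # Find the strongest non-DC frequency bin
    k_max = max(range(1, n), key=lambda k: abs(spectrum[k]))
    # Damp it together with its conjugate bin (set avoids double-damping k = n/2)
    for k in {k_max, n - k_max}:
        spectrum[k] *= (1.0 - damping)
    # Inverse DFT; return the smoothed most-recent value
    smoothed = [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                    for k in range(n)).real / n for t in range(n)]
    return smoothed[-1]
```

A constant history passes through unchanged (no oscillation to damp), while a pure sign-flipping history is attenuated by the damping factor.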

### ResonanceScheduler only

```python
from nexustrain import SpectraOptimizer, ResonanceScheduler, NexusConfig

opt   = SpectraOptimizer(model.parameters(), lr=2e-4)
cfg   = NexusConfig(base_lr=2e-4, warmup_steps=100)
sched = ResonanceScheduler(opt, cfg)

# Call on every training step
sched.step(loss=loss.item(), grad_norm=gnorm)
```
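
One plausible reading of "phase detection via gradient-spectrum entropy": a flat power spectrum of recent gradient norms (high entropy) suggests noisy exploration, while a peaked one (low entropy) suggests the run has settled into a dominant dynamic. The stdlib sketch below works under that assumption; `phase_from_entropy` and its `threshold` are made up for illustration and are not the scheduler's API.

```python
import cmath
import math

def phase_from_entropy(grad_history, threshold=0.8):
    """Classify a training phase from the normalized spectral entropy
    of a gradient-norm history. Hypothetical illustration only."""
    n = len(grad_history)
    # Power spectrum over non-DC bins (naive DFT)
    power = [abs(sum(grad_history[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                     for t in range(n))) ** 2 for k in range(1, n)]
    total = sum(power) or 1.0
    probs = [p / total for p in power if p > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(len(power)) if len(power) > 1 else 1.0
    return "exploration" if entropy / max_entropy > threshold else "convergence"
```

An impulse-like history (energy spread across every frequency) reads as exploration; a single clean oscillation reads as convergence.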

---

## Configuration

```python
cfg = NexusConfig(
    power               = 1.0,    # global power, 0.0–1.0

    # Speed
    crystal_core        = True,
    morphic_memory      = True,
    crystal_pipeline    = True,
    zero_waste          = True,

    # Intelligence
    spectra_optimizer   = True,
    resonance_scheduler = True,
    chromatic_precision = True,
    gradient_harmonics  = True,
    harmonic_noise      = 0.001,  # for generalization
    harmonic_compress   = 0.0,    # for distributed training

    # Monitoring
    neural_profiler     = True,
    profiler_alert_ms   = 500.0,

    # LR
    base_lr             = 2e-4,
    min_lr              = 1e-6,
    warmup_steps        = 100,
    resonance_sensitivity = 0.5,

    # General
    device              = "auto",   # auto-detect cuda/cpu
    dtype               = "auto",   # auto-detect bf16/fp16/fp32
    seed                = 42,
    output_dir          = "./nexus_output",
)
```

---

## Environment Check

```bash
nexustrain-check
```

or from Python:

```python
from nexustrain import nexus_check
nexus_check()
```

---

## Installation Options

```bash
pip install nexustrain                # only PyTorch required
pip install "nexustrain[full]"        # + transformers, peft, trl, bitsandbytes
pip install "nexustrain[speed]"       # + flash-attn, triton
pip install "nexustrain[distributed]" # + deepspeed, accelerate
pip install "nexustrain[tuning]"      # + optuna
pip install "nexustrain[logging]"     # + wandb, tensorboard
pip install "nexustrain[all]"         # everything
pip install "nexustrain[dev]"         # developer tools
```

---

## Test

```bash
pip install "nexustrain[dev]"
pytest tests/ -v
```

---

## License

MIT © 2025 Ömür Bera Işık — [github.com/fastloraoffical](https://github.com/fastloraoffical)
