Metadata-Version: 2.4
Name: nexlora
Version: 0.2.0
Summary: Next-generation LLM fine-tuning engine with self-optimizing training
Author-email: Ömür Bera Işık <gamegameromur@gmail.com>
License-Expression: MIT
Project-URL: Homepage, https://github.com/gamegameromur-a11y/nexlora
Project-URL: Repository, https://github.com/gamegameromur-a11y/nexlora
Project-URL: Issues, https://github.com/gamegameromur-a11y/nexlora/issues
Keywords: lora,qlora,fine-tuning,llm,transformers,peft,quantization,training,pytorch,ai,deep-learning
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Operating System :: OS Independent
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=2.1.0
Provides-Extra: full
Requires-Dist: peft>=0.10.0; extra == "full"
Requires-Dist: bitsandbytes>=0.43.0; extra == "full"
Requires-Dist: trl>=0.8.0; extra == "full"
Requires-Dist: datasets>=2.18.0; extra == "full"
Requires-Dist: transformers>=4.40.0; extra == "full"
Requires-Dist: accelerate>=0.27.0; extra == "full"
Requires-Dist: optuna; extra == "full"
Provides-Extra: logging
Requires-Dist: wandb; extra == "logging"
Requires-Dist: tensorboard; extra == "logging"
Provides-Extra: flash
Requires-Dist: flash-attn>=2.5.0; extra == "flash"
Provides-Extra: distributed
Requires-Dist: deepspeed; extra == "distributed"
Provides-Extra: triton
Requires-Dist: triton; extra == "triton"
Provides-Extra: all
Requires-Dist: peft>=0.10.0; extra == "all"
Requires-Dist: bitsandbytes>=0.43.0; extra == "all"
Requires-Dist: trl>=0.8.0; extra == "all"
Requires-Dist: datasets>=2.18.0; extra == "all"
Requires-Dist: wandb; extra == "all"
Requires-Dist: tensorboard; extra == "all"
Requires-Dist: deepspeed; extra == "all"
Requires-Dist: triton; extra == "all"
Requires-Dist: transformers>=4.40.0; extra == "all"
Requires-Dist: accelerate>=0.27.0; extra == "all"
Requires-Dist: optuna; extra == "all"
Dynamic: license-file

# NexLoRA

NexLoRA merges **FastLoRA v4.0** and **NexusTrain v1.0** into a single installable package.

**Authors:**
- FastLoRA: FastLoRA Contributors
- NexusTrain: Ömür Bera Işık

## Installation

```bash
pip install nexlora
```
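
Optional dependency groups are declared in the package metadata and can be installed as extras:

```bash
pip install "nexlora[full]"         # peft, bitsandbytes, trl, datasets, transformers, accelerate, optuna
pip install "nexlora[logging]"      # wandb, tensorboard
pip install "nexlora[flash]"        # flash-attn
pip install "nexlora[distributed]"  # deepspeed
pip install "nexlora[triton]"       # triton
pip install "nexlora[all]"          # all of the above
```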

## Usage

### FastLoRA Usage

```python
from nexlora import fastlora

# load() returns the (model, tokenizer) pair, ready for LoRA/QLoRA fine-tuning
model, tokenizer = fastlora.FastLoRA("meta-llama/Llama-3.2-3B").load()
# ... training ...
```

### NexusTrain Usage

```python
from nexlora import nexustrain

# model and tokenizer come from the FastLoRA step above;
# trainer is an existing Hugging Face Trainer instance
nt = nexustrain.NexusTrain(model, tokenizer, nexustrain.NexusConfig())
trainer = nt.patch_trainer(trainer)  # wrap the trainer with NexusTrain's hooks
nt.engage()                          # activate the self-optimizing engine
# ... training ...
```
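
Putting the two together, a minimal end-to-end sketch looks like the following. The dataset and the Hugging Face `Trainer`/`TrainingArguments` wiring are illustrative assumptions, not part of the NexLoRA API shown above:

```python
from transformers import Trainer, TrainingArguments
from nexlora import fastlora, nexustrain

# Load a LoRA-ready model via FastLoRA
model, tokenizer = fastlora.FastLoRA("meta-llama/Llama-3.2-3B").load()

# Build a standard Hugging Face Trainer
train_dataset = ...  # placeholder: your tokenized dataset
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1),
    train_dataset=train_dataset,
)

# Attach NexusTrain's self-optimizing engine, then train
nt = nexustrain.NexusTrain(model, tokenizer, nexustrain.NexusConfig())
trainer = nt.patch_trainer(trainer)
nt.engage()
trainer.train()
```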

### Environment Check

```bash
nexlora-check
```

## Features

### FastLoRA (v4.0)
- LoRA / QLoRA fine-tuning
- 2-bit quantization (independent of bitsandbytes)
- OOM recovery with auto-resume
- Custom Triton kernels (RMSNorm, SwiGLU, RoPE)
- Mixture of Experts (MoE) support
- Optuna hyperparameter tuning (see the sketch after this list)
- CPU/NVMe offloading (ZeRO-Infinity style)
- Smart VRAM guard
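
How the Optuna tuning is wired internally is not documented above; the following is a hypothetical sketch using Optuna's standard API, where `train_once` and the search space are illustrative assumptions:

```python
import optuna

def train_once(learning_rate: float, lora_rank: int) -> float:
    # Hypothetical helper: run a short fine-tuning job with these
    # hyperparameters and return the final eval loss.
    raise NotImplementedError("wire this to your training loop")

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True)
    rank = trial.suggest_categorical("lora_rank", [8, 16, 32, 64])
    return train_once(learning_rate=lr, lora_rank=rank)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print("best:", study.best_params)
```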

### NexusTrain (v1.0)
- CrystalCore™ - Runtime kernel crystallization
- MorphicMemory™ - Adaptive memory morphism
- SpectraOptimizer™ - Frequency domain optimizer
- ResonanceScheduler™ - Self-tuning learning rate (a generic analogy follows this list)
- ChromaticPrecision™ - Layer-wise dynamic precision
- GradientHarmonics™ - Wavelet-based gradient processing
- NeuralProfiler™ - LSTM-based training predictor
- CrystalPipeline™ - Self-reconfiguring training pipeline
- ZeroWaste™ - Unnecessary computation elimination
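
The ResonanceScheduler™ internals are not documented here. Purely as a generic analogy for a loss-reactive, self-tuning schedule (not NexusTrain's actual mechanism), PyTorch's built-in `ReduceLROnPlateau` behaves as follows:

```python
import torch

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
# Halve the learning rate when the monitored loss stops improving
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=3
)

for step in range(100):
    x = torch.randn(8, 16)
    loss = torch.nn.functional.mse_loss(model(x), torch.zeros(8, 1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())  # feed it the metric it should react to
```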

## License

MIT License - See LICENSE file for details.
