Metadata-Version: 2.4
Name: auto-lora
Version: 0.2.1
Summary: Automated Hyperparameter Optimization Platform for Efficient LLM Fine-Tuning
Author: shrey1720
License: MIT
Project-URL: Homepage, https://github.com/shrey1720/auto-lora
Project-URL: Repository, https://github.com/shrey1720/auto-lora
Keywords: llm,lora,fine-tuning,hyperparameter-optimization,peft
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=2.1.0
Requires-Dist: transformers>=4.38.0
Requires-Dist: peft>=0.8.0
Requires-Dist: datasets>=2.16.0
Requires-Dist: accelerate>=0.26.0
Requires-Dist: bitsandbytes>=0.42.0
Requires-Dist: trl>=0.7.0
Requires-Dist: sentencepiece>=0.1.99
Requires-Dist: protobuf>=3.20.0
Requires-Dist: optuna>=3.5.0
Requires-Dist: typer[all]>=0.9.0
Requires-Dist: rich>=13.0.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: matplotlib>=3.8.0
Requires-Dist: jinja2>=3.1.0
Requires-Dist: psutil>=5.9.0
Requires-Dist: pynvml>=11.5.0
Requires-Dist: numpy>=1.24.0
Requires-Dist: unsloth
Requires-Dist: rouge-score
Requires-Dist: nltk
Requires-Dist: py-cpuinfo
Requires-Dist: tabulate
Requires-Dist: sentence-transformers
Requires-Dist: sacrebleu
Requires-Dist: scikit-learn
Provides-Extra: dev
Requires-Dist: pytest>=7.4.0; extra == "dev"
Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
Requires-Dist: ruff>=0.2.0; extra == "dev"
Provides-Extra: wandb
Requires-Dist: wandb>=0.16.0; extra == "wandb"
Dynamic: license-file

# 🚀 Auto-LoRA

> **The Automated Hyperparameter Optimization Platform for Efficient LLM Fine-Tuning.**

[![PyPI version](https://img.shields.io/pypi/v/auto-lora.svg)](https://pypi.org/project/auto-lora/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/release/python-3100/)

Auto-LoRA is a powerful, scientific framework designed to take the guesswork out of Large Language Model (LLM) fine-tuning. By combining **Bayesian Optimization** (via Optuna) with **High-Performance Training Engines** (via Unsloth and PEFT), Auto-LoRA automatically identifies the optimal LoRA (Low-Rank Adaptation) configurations for your specific dataset and hardware constraints.

---

## 🌟 Key Features

### 🎯 Intelligent Hyperparameter Tuning
Stop guessing ranks and learning rates. Auto-LoRA uses Optuna to search for the best combination of:
- **LoRA Rank (r)** and **Alpha**
- **Learning Rate** and **Scheduler**
- **Dropout Rates**
- **Target Modules**

### ⚡ Unsloth Integration
Built-in support for **Unsloth**, providing:
- **2x–5x faster** training.
- **Up to 70% less** VRAM usage.
- Automatic fallback to standard PEFT if hardware is incompatible.
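
The fallback can be pictured roughly like this (a simplified sketch, not the project's actual detection code):

```python
def select_training_engine() -> str:
    """Prefer Unsloth's fused kernels when a CUDA GPU is present,
    otherwise fall back to standard PEFT."""
    try:
        import torch
        import unsloth  # noqa: F401 -- requires a supported NVIDIA GPU
        if torch.cuda.is_available():
            return "unsloth"
    except Exception:
        pass  # unsloth (or torch) unavailable or unsupported on this machine
    return "peft"

print(select_training_engine())
```

Catching the import at runtime means the same entry point works on laptops without CUDA and on training boxes with it.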

### 📊 Scientific Metric Suite
Move beyond simple loss curves: Auto-LoRA generates publication-quality reports covering:
- **NLP Quality**: ROUGE-L, BLEU, and Semantic Similarity (via Sentence-Transformers).
- **Inference Efficiency**: Tokens Per Second (TPS), Latency (ms).
- **Hardware Profile**: Peak VRAM usage, System VRAM efficiency.
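
The efficiency numbers need nothing more than a wall clock; the sketch below times a hypothetical `generate` callable (a stand-in for a real model's decode loop):

```python
import time

def measure_inference(generate, prompt: str) -> dict:
    """Time one generation call and derive simple efficiency metrics."""
    start = time.perf_counter()
    tokens = generate(prompt)  # stand-in: returns a list of generated tokens
    elapsed = time.perf_counter() - start
    return {
        "latency_ms": elapsed * 1000.0,
        "tokens_per_second": len(tokens) / elapsed if elapsed > 0 else float("inf"),
    }

# Dummy generator standing in for a fine-tuned adapter.
stats = measure_inference(lambda p: p.split() * 10, "hello world")
print(stats)
```

In practice you would average over many prompts and separate prefill from decode time, but the bookkeeping is the same.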

### 📈 Dynamic Visualization
Generate stunning HTML dashboards and publication-quality Matplotlib charts with a single command.

---

## 🚀 Quick Start

### Installation

**Standard Installation (Recommended)**
```bash
pip install auto-lora
```

**From Source (For Developers)**
```bash
git clone https://github.com/shrey1720/auto-lora.git
cd auto-lora
pip install -e ".[dev]"
```

**Recommended for NVIDIA GPUs**
```bash
pip install unsloth xformers
```

---

## 🛠 Usage Guide

### 1. System Health Check
Ensure your GPU and VRAM are ready for training.
```bash
auto-lora doctor
```

### 2. Basic Training
Train with default settings and automatic tuning.
```bash
auto-lora train --model "meta-llama/Llama-3.2-1B" --data "my_dataset.json" --max-trials 5
```
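
The dataset file is typically a list of instruction/response records. The layout below is a common instruction-tuning schema and is only an assumption here; consult the project documentation for the exact fields Auto-LoRA expects:

```json
[
  {
    "instruction": "Summarize the following paragraph.",
    "input": "LoRA adds small trainable low-rank matrices to a frozen base model...",
    "output": "LoRA fine-tunes models cheaply by training only low-rank adapters."
  }
]
```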

### 3. Using Expert Presets
Auto-LoRA comes with pre-configured settings for specific domains:
- `--preset chatbot`: Optimized for conversational flow.
- `--preset coding`: Lower learning rate, optimized for logic.
- `--preset summarization`: Focuses on context retention.

### 4. Scientific Benchmarking
Compare your trained adapter's outputs against ground-truth references to produce a technical profile.
```bash
auto-lora benchmark --run <run_id> --references test_set.json
```

---

## 📂 Project Architecture

The system has a modular design for extensibility:

```text
auto_lora/
├── tuner/        # Bayesian optimization and search spaces
├── trainer/      # LoRA/QLoRA engine (Unsloth & PEFT)
├── dataset/      # Dynamic loading and scientific validation
├── hardware/     # VRAM analysis and hardware-aware strategy
├── metrics/      # Scorer engine (NLP & Performance)
├── reports/      # HTML Exporters and Chart Generators
├── db/           # SQLite persistence for all runs/trials
└── cli/          # Typer-powered command interface
```
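
The job of the `db/` layer — persisting every run and trial — can be sketched with the standard library's `sqlite3`; the table names and columns here are hypothetical, not the project's actual schema:

```python
import sqlite3

def init_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create a minimal runs/trials schema (columns are illustrative)."""
    conn = sqlite3.connect(path)
    conn.executescript("""
        CREATE TABLE IF NOT EXISTS runs (
            id INTEGER PRIMARY KEY,
            model TEXT NOT NULL,
            created_at TEXT DEFAULT CURRENT_TIMESTAMP
        );
        CREATE TABLE IF NOT EXISTS trials (
            id INTEGER PRIMARY KEY,
            run_id INTEGER REFERENCES runs(id),
            params TEXT,          -- JSON-encoded hyperparameters
            eval_loss REAL
        );
    """)
    return conn

conn = init_db()
run_id = conn.execute(
    "INSERT INTO runs (model) VALUES (?)", ("meta-llama/Llama-3.2-1B",)
).lastrowid
conn.execute(
    "INSERT INTO trials (run_id, params, eval_loss) VALUES (?, ?, ?)",
    (run_id, '{"lora_r": 16}', 1.23),
)
print(conn.execute("SELECT COUNT(*) FROM trials").fetchone()[0])  # → 1
```

Keeping trial parameters as JSON text keeps the schema stable while the search space evolves.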

---

## 🔬 Technical Roadmap

- [ ] **Multi-GPU Support**: DDP and FSDP integration.
- [ ] **DPO Tuning**: Direct Preference Optimization tuning loop.
- [ ] **Custom Scoring Functions**: Allow users to define their own success metrics.
- [ ] **HuggingFace Hub Integration**: Direct upload of tuned adapters.

---

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.


