Metadata-Version: 2.4
Name: adaptera
Version: 0.1.1
Summary: A local-first LLM orchestration library
Author: Sylo
License-Expression: MIT
Keywords: llm,orchestration,local-first,ai,machine-learning
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.12
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch>=2.0.0
Requires-Dist: transformers>=4.40.0
Requires-Dist: peft>=0.10.0
Requires-Dist: accelerate>=0.29.0
Requires-Dist: bitsandbytes>=0.43.0
Requires-Dist: numpy>=1.24.0
Requires-Dist: faiss-cpu>=1.8.0
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: build; extra == "dev"
Requires-Dist: twine; extra == "dev"
Dynamic: license-file

# Adaptera 🌌

A local-first LLM orchestration library with native support for Hugging Face, PEFT/LoRA, QLoRA, and API models — without hiding the model.

---
> **Note:** This project is in early development and may undergo significant changes, but the core goal of local LLM processing will remain consistent. Once the agentic part of the module is stable, we plan to add a fine-tuner so the library can serve as a quick way to prototype local agentic models.
> 
> Contributions are welcome; please do not spam pull requests. Any and all help is deeply appreciated.
---
## Features

- **Local-First**: Built for running LLMs on your own hardware efficiently.
- **Native PEFT/QLoRA**: Seamless integration with Hugging Face's PEFT for efficient model loading.
- **Persistent Memory**: Vector-based memory using FAISS with automatic text embedding (SLM).
- **Strict ReAct Agents**: Deterministic agent loops using JSON-based tool calls.
- **Model Transparency**: Easy access to the underlying Hugging Face model and tokenizer.
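
The persistent-memory feature stores text as vectors and retrieves the most similar entries at query time. The sketch below illustrates that idea only; it is not Adaptera's API (Adaptera uses FAISS internally, while here plain NumPy and the hypothetical class name `TinyVectorMemory` stand in):

```python
import numpy as np

class TinyVectorMemory:
    """Stores (embedding, text) pairs and retrieves by cosine similarity."""

    def __init__(self, dim: int):
        self.dim = dim
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.texts: list[str] = []

    def add(self, embedding: np.ndarray, text: str) -> None:
        v = embedding.astype(np.float32).reshape(1, self.dim)
        v /= np.linalg.norm(v) + 1e-12          # normalize so dot product = cosine
        self.vectors = np.vstack([self.vectors, v])
        self.texts.append(text)

    def search(self, query: np.ndarray, k: int = 1) -> list[str]:
        q = query.astype(np.float32).ravel()
        q /= np.linalg.norm(q) + 1e-12
        scores = self.vectors @ q               # cosine similarity to each stored vector
        top = np.argsort(scores)[::-1][:k]      # indices of the k best matches
        return [self.texts[i] for i in top]
```

A FAISS index plays the same role at scale, with the index persisted to disk between runs.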

## Installation

### Using pip
```bash
pip install adaptera
```

### Using Anaconda/Miniforge
```bash
conda activate <env-name>
pip install adaptera
```

*(Note: Requires Python 3.12+)*
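
The package also declares a `dev` extra (pytest, build, twine) for contributors:

```shell
pip install "adaptera[dev]"
```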

## Quick Start

```python
from adaptera import Agent, AdapteraModel, VectorDB, Tool

# 1. Initialize Vector Memory
db = VectorDB(index_file="memory.index")

# 2. Load a Model (with 4-bit quantization)
model = AdapteraModel(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    quantization="4bit",
    vector_db=db
)

# 3. Define Tools
def add(a, b):
    """Adds two numbers together"""
    return a + b

tools = [
    Tool(name="add", func=add, description="Adds two numbers together. Input: 'a,b'")
]

# 4. Create and Run Agent
agent = Agent(model, tools=tools)
print(agent.run("What is 15 + 27?"))
```
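
Under the hood, a strict ReAct loop stays deterministic by accepting only well-formed JSON tool calls from the model and rejecting everything else. The following is a minimal, library-agnostic sketch of that pattern, not Adaptera's internal API; the function name `dispatch_tool_call` and the JSON shape are illustrative assumptions:

```python
import json

def dispatch_tool_call(raw: str, tools: dict) -> str:
    """Parse a model's JSON tool call and run the matching tool.

    Expects the model to emit strict JSON such as:
        {"tool": "add", "args": {"a": 15, "b": 27}}
    Anything else is rejected with an error observation, which keeps
    the agent loop deterministic instead of guessing at intent.
    """
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return "ERROR: output was not valid JSON"
    name = call.get("tool")
    if name not in tools:
        return f"ERROR: unknown tool {name!r}"
    # Run the tool and return its result as the observation string.
    return str(tools[name](**call.get("args", {})))
```

The error strings are fed back to the model as observations, so a malformed call becomes a recoverable step rather than a crash.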

## Project Structure

- `adaptera/chains/`: Agentic workflows and ReAct implementations.
- `adaptera/model/`: Hugging Face model loading and generation wrappers.
- `adaptera/memory/`: FAISS-backed persistent vector storage.
- `adaptera/tools/`: Tool registry and definition system.

## Non-goals

This library does not aim to be a full ML framework or replace existing tools like LangChain. It focuses on providing a clean, minimal interface for local-first LLM orchestration.
