Metadata-Version: 2.4
Name: lmxlab
Version: 0.4.0
Summary: Transformer language models on Apple Silicon with MLX
Project-URL: Homepage, https://github.com/michaelellis003/lmxlab
Project-URL: Repository, https://github.com/michaelellis003/lmxlab
Project-URL: Documentation, https://michaelellis003.github.io/lmxlab/
Project-URL: Issues, https://github.com/michaelellis003/lmxlab/issues
Project-URL: Changelog, https://github.com/michaelellis003/lmxlab/blob/main/CHANGELOG.md
Author: Michael Ellis
License-Expression: MIT
License-File: LICENSE
Keywords: apple-silicon,deep-learning,language-models,mlx,transformers
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.12
Requires-Dist: mlx>=0.25
Requires-Dist: numpy
Requires-Dist: safetensors
Requires-Dist: scipy>=1.17.1
Requires-Dist: tiktoken>=0.12.0
Provides-Extra: dev
Requires-Dist: mkdocs-material; extra == 'dev'
Requires-Dist: mkdocstrings[python]; extra == 'dev'
Requires-Dist: mypy>=1.11; extra == 'dev'
Requires-Dist: pre-commit>=4.0; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: ruff>=0.7; extra == 'dev'
Provides-Extra: hf
Requires-Dist: datasets; extra == 'hf'
Requires-Dist: huggingface-hub; extra == 'hf'
Requires-Dist: transformers; extra == 'hf'
Provides-Extra: plotting
Requires-Dist: matplotlib; extra == 'plotting'
Provides-Extra: tokenizers
Requires-Dist: tiktoken; extra == 'tokenizers'
Provides-Extra: tracking
Requires-Dist: mlflow; extra == 'tracking'
Description-Content-Type: text/markdown

# lmxlab

Transformer language models on Apple Silicon, built with [MLX](https://ml-explore.github.io/mlx/).

[![CI](https://github.com/michaelellis003/lmxlab/actions/workflows/ci.yml/badge.svg)](https://github.com/michaelellis003/lmxlab/actions/workflows/ci.yml)
[![Docs](https://github.com/michaelellis003/lmxlab/actions/workflows/docs.yml/badge.svg)](https://michaelellis003.github.io/lmxlab/)
[![PyPI](https://img.shields.io/pypi/v/lmxlab)](https://pypi.org/project/lmxlab/)
[![Python](https://img.shields.io/pypi/pyversions/lmxlab)](https://pypi.org/project/lmxlab/)
[![License](https://img.shields.io/github/license/michaelellis003/lmxlab)](LICENSE)

## Install

```bash
pip install lmxlab
```

Requires Python 3.12 or later. Apple Silicon (M1 or later) is recommended; MLX also runs on Intel Macs and Linux, but CPU-only.
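The package also declares optional extras (`dev`, `hf`, `plotting`, `tokenizers`, `tracking`, as listed in the metadata above), which can be pulled in with pip's extras syntax:

```bash
# Hugging Face integrations (datasets, huggingface-hub, transformers)
pip install "lmxlab[hf]"

# Several extras at once, e.g. plotting and experiment tracking
pip install "lmxlab[plotting,tracking]"
```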

## Usage

```python
import mlx.core as mx
from lmxlab.models.llama import llama_config
from lmxlab.models.base import LanguageModel

config = llama_config(vocab_size=32000, d_model=512, n_heads=8, n_kv_heads=4, n_layers=6)
model = LanguageModel(config)
mx.eval(model.parameters())

tokens = mx.array([[1, 234, 567]])
logits, caches = model(tokens)
```

Architecture variants (GPT, LLaMA, DeepSeek, Gemma, Qwen, Mixtral, etc.) are configuration factories: the same `LanguageModel` class with different settings.
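Since the forward pass returns per-position logits, text generation reduces to repeatedly feeding the sequence back in and picking the next token. A minimal greedy-decoding sketch is below; note that `model` here is a hypothetical NumPy stub standing in for the `LanguageModel` call shown above (which returns `(logits, caches)`), so the loop runs without MLX or lmxlab installed. `greedy_generate` is an illustrative helper name, not part of the lmxlab API.

```python
import numpy as np

def model(tokens):
    """Hypothetical stand-in for lmxlab's forward pass.

    Returns (logits, caches) for a batch of token ids, with logits of
    shape (batch, seq_len, vocab_size), mirroring the call in Usage.
    """
    vocab_size = 10
    rng = np.random.default_rng(int(tokens.sum()))  # deterministic dummy logits
    logits = rng.standard_normal((tokens.shape[0], tokens.shape[1], vocab_size))
    return logits, None

def greedy_generate(prompt, max_new_tokens=4):
    # Greedy decoding: at each step, take the argmax of the logits at the
    # final position and append it to the running sequence.
    tokens = np.array(prompt)
    for _ in range(max_new_tokens):
        logits, _ = model(tokens)
        next_id = int(np.argmax(logits[:, -1, :], axis=-1)[0])
        tokens = np.concatenate([tokens, [[next_id]]], axis=1)
    return tokens

out = greedy_generate([[1, 2, 3]], max_new_tokens=4)
print(out.shape)  # (1, 7): 3 prompt tokens + 4 generated
```

With the real model, swapping the stub for `LanguageModel(config)` and `np` for `mx` gives the same loop; the returned `caches` would let each step reuse prior attention state instead of recomputing the full sequence.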

## CLI

```bash
lmxlab list                    # Show available architectures
lmxlab info llama --tiny       # Config details
lmxlab count deepseek --detail # Parameter breakdown
```

## Docs

Full API docs at [michaelellis003.github.io/lmxlab](https://michaelellis003.github.io/lmxlab/).

## Development

```bash
git clone https://github.com/michaelellis003/lmxlab.git
cd lmxlab
uv sync --extra dev
uv run pre-commit install
uv run pytest
```

## License

MIT
