Metadata-Version: 2.4
Name: zlynx
Version: 0.1.10
Summary: Zlynx is a lightweight, highly customizable deep learning library built on top of JAX and Flax NNX
Author: Shinapri
Maintainer: Shinapri
License: Apache-2.0
Project-URL: Homepage, https://github.com/zlynx-ai/zlynx
Project-URL: Repository, https://github.com/zlynx-ai/zlynx
Project-URL: Issues, https://github.com/zlynx-ai/zlynx/issues
Project-URL: Documentation, https://zlynx-ai.github.io/zlynx
Keywords: jax,flax,deep-learning,machine-learning,neural-networks
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.12
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: datasets>=4.6.1
Requires-Dist: flax>=0.12.5
Requires-Dist: grain>=0.2.16
Requires-Dist: jax>=0.9.2
Requires-Dist: optax>=0.2.6
Requires-Dist: orbax>=0.1.9
Requires-Dist: safetensors>=0.7.0
Provides-Extra: dev
Requires-Dist: pytest>=9.0.2; extra == "dev"
Requires-Dist: twine>=6.0.0; extra == "dev"
Requires-Dist: build>=1.2.0; extra == "dev"
Provides-Extra: cpu
Provides-Extra: tpu
Requires-Dist: jax[tpu]>=0.9.2; extra == "tpu"
Provides-Extra: cuda
Requires-Dist: jax[cuda]>=0.9.2; extra == "cuda"
Provides-Extra: cuda12
Requires-Dist: jax[cuda12]>=0.9.2; extra == "cuda12"
Provides-Extra: cuda13
Requires-Dist: jax[cuda13]>=0.9.2; extra == "cuda13"
Provides-Extra: cuda12-local
Requires-Dist: jax[cuda12-local]>=0.9.2; extra == "cuda12-local"
Provides-Extra: cuda13-local
Requires-Dist: jax[cuda13-local]>=0.9.2; extra == "cuda13-local"
Dynamic: license-file

# Zlynx

[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/zlynx-ai/zlynx)

A lightweight, highly customizable deep learning library built on **JAX** and **Flax NNX**. Designed for researchers and developers who want fine-grained control over model architectures, training loops, and distributed setups without the bloat of massive frameworks.

## Install

```bash
uv pip install zlynx
```
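
Accelerator and dev extras are declared in the package metadata (`tpu`, `cuda`, `cuda12`, `cuda13`, the `*-local` variants, and `dev`); each pulls in the matching `jax` build:

```bash
uv pip install "zlynx[cuda12]"  # CUDA 12 wheels via jax[cuda12]
uv pip install "zlynx[tpu]"     # TPU support via jax[tpu]
```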

## Define & Load Models

```python
import jax

from zlynx import Z

class MyModel(Z): ...

# Load from HuggingFace
model = MyModel.load_hf("username/my-model", format="safetensors")

# Load from Kaggle
model = MyModel.load_kaggle("username/my-model", sharding="fsdp")

# Load from local checkpoint
model = MyModel.load("./checkpoint", key=jax.random.key(0))
```

## Built-in Llama

```python
import jax
import jax.numpy as jnp

from zlynx.models.llama import LlamaConfig, LlamaLanguageModel

config = LlamaConfig(vocab_size=32000, hidden_size=512, num_hidden_layers=2)
model = LlamaLanguageModel(config)

# Generate from a [batch, seq_len] array of prompt token IDs
input_ids = jnp.array([[1, 2, 3]])  # illustrative prompt tokens
output_ids = model.generate(input_ids, key=jax.random.key(0), max_new_tokens=128)
```
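
If `LlamaLanguageModel` subclasses `Z` (an assumption; the source does not state its base class), the loaders from the previous section apply directly:

```python
# Assumption: LlamaLanguageModel inherits Z's loaders; the repo name is hypothetical.
model = LlamaLanguageModel.load_hf("username/my-llama", format="safetensors")
```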

## Train

```python
from zlynx.trainer import Trainer, TrainerConfig

trainer = Trainer(
    model=model,            # e.g. the model built in a previous section
    loss_fn=loss_fn,        # user-supplied loss; see the sketch below
    train_dataset=dataset,  # list, HF dataset, dict, or iterable
    config=TrainerConfig(
        batch_size=32,
        learning_rate=5e-5,
        num_epochs=3,
        sharding="auto",
    ),
)
trainer.train()
```
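
The `Trainer` expects a user-supplied `loss_fn`; its exact signature is not documented here. A minimal sketch, assuming it receives the model and a batch dict and returns a scalar loss, built on `optax` (already a zlynx dependency):

```python
import optax

# Hypothetical signature: loss_fn(model, batch) -> scalar loss.
# The batch keys and the model's forward call are assumptions.
def loss_fn(model, batch):
    logits = model(batch["input_ids"])
    return optax.softmax_cross_entropy_with_integer_labels(
        logits, batch["labels"]
    ).mean()
```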

## PEFT (LoRA, DoRA, VeRA, LoHa, LoKr, AdaLoRA)

```python
from zlynx.modules.peft import apply_peft

model = apply_peft(model, method="lora", r=16, alpha=32, target_modules=["q_proj", "v_proj"])
```
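
Swapping adapters should be a one-argument change, assuming the `method` strings follow the same lowercase convention as `"lora"` (a sketch, not confirmed API):

```python
# Hypothetical: select DoRA instead of LoRA; keyword arguments assumed to carry over.
model = apply_peft(model, method="dora", r=16, alpha=32, target_modules=["q_proj", "v_proj"])
```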

## Save & Push

```python
model.save("./my-model", format="safetensors")
model.push_hf("username/my-model")
model.push_kaggle("username/my-model")
```

## Features

- **Checkpointing** — Orbax + SafeTensors, HuggingFace Hub & Kaggle integration
- **Training** — gradient accumulation, LR scheduling, multi-backend logging (W&B, TensorBoard)
- **Sharding** — auto, DDP, FSDP with one config change; see the sketch after this list
- **PEFT** — 6 adapter methods via `apply_peft()`
- **GaLore** — gradient low-rank projection for memory-efficient full fine-tuning
- **Data** — Grain-based pipeline accepting lists, HF datasets, dicts, and iterables
- **Modules** — Attention (GQA/MQA), MLP (SwiGLU), RMSNorm, RoPE, KV Cache, DiT blocks
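
The sharding and data bullets combine naturally: the DDP/FSDP switch is the same `TrainerConfig` field shown in the Train section, and `train_dataset` accepts a plain Python list. A minimal sketch (the record schema here is illustrative, not a documented format):

```python
from zlynx.trainer import Trainer, TrainerConfig

# Same Trainer as above; only the sharding field changes ("auto" -> "fsdp").
trainer = Trainer(
    model=model,
    loss_fn=loss_fn,
    train_dataset=[{"input_ids": [1, 2, 3]}],  # plain list, assumed schema
    config=TrainerConfig(batch_size=32, learning_rate=5e-5, num_epochs=3, sharding="fsdp"),
)
```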

## Documentation

- [Getting Started](docs/tutorials/getting_started/01_installation.md)
- [MNIST Tutorial](docs/tutorials/mnist.md)
- [API Reference — Trainer](docs/api-references/trainer.md)
- [API Reference — Modules](docs/api-references/modules.md)
- [API Reference — Models](docs/api-references/models.md)
