Metadata-Version: 2.4
Name: OptiRoulette-Keras
Version: 0.1.2
Summary: Keras OptiRoulette meta-optimizer (PyTorch backend only)
Author: Stamatis Mastromichalakis
License-Expression: MIT
Project-URL: Homepage, https://github.com/MStamatis/OptiRoulette
Project-URL: Repository, https://github.com/MStamatis/OptiRoulette
Keywords: keras,pytorch,optimizer,meta-optimizer,deep-learning
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: keras>=3.0.0
Requires-Dist: torch>=2.1.0
Requires-Dist: numpy>=1.23.0
Requires-Dist: PyYAML>=6.0
Requires-Dist: pytorch-optimizer>=3.7.0
Provides-Extra: dev
Requires-Dist: build>=1.2.1; extra == "dev"
Requires-Dist: twine>=5.1.1; extra == "dev"
Requires-Dist: pytest>=8.2.0; extra == "dev"
Dynamic: license-file

# OptiRoulette-Keras (Torch Backend)

This package accompanies the paper "OptiRoulette Optimizer: A New Stochastic
Meta-Optimizer for up to 5.3x Faster Convergence".

A standalone, pip-installable Keras meta-optimizer that brings OptiRoulette's
training logic to Keras `compile`/`fit` flows:
- random optimizer switching
- warmup -> roulette phase handling
- optimizer pool with active/backup swapping
- compatibility-aware replacement
- learning-rate scaling rules when switching
- momentum/state transfer on swap
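
The learning-rate scaling step above can be illustrated with a minimal, self-contained sketch. The rule table and the `scaled_lr` helper here are purely illustrative; OptiRoulette's real rules are defined in the bundled `optimized.yaml` profile:

```python
# Illustrative only: a toy compatibility-aware LR scale table. The actual
# OptiRoulette rules live in the package's bundled optimized.yaml profile.
LR_SCALE_RULES = {
    ("adamw", "sgd"): 10.0,   # adaptive -> plain SGD often tolerates a larger LR
    ("sgd", "adamw"): 0.1,    # plain SGD -> adaptive often needs a smaller LR
}

def scaled_lr(old_name, new_name, old_lr, rules=LR_SCALE_RULES):
    """Return the learning rate to use after switching optimizers.

    Pairs without an explicit rule keep the old learning rate unchanged.
    """
    return old_lr * rules.get((old_name, new_name), 1.0)
```

The key idea is that a switch between optimizer families is not LR-neutral, so each transition can carry its own multiplier.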

The default behavior is loaded from the bundled `optimized.yaml` profile.
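
For reference, a profile of this shape can be parsed with PyYAML (a declared dependency). The keys in this snippet are illustrative, mirroring the `create_roulette_callback` arguments documented in this README; the actual bundled `optimized.yaml` may differ:

```python
import yaml

# Illustrative profile text only; key names mirror the create_roulette_callback
# arguments, but the real bundled optimized.yaml may contain different content.
PROFILE = """
optimizer_specs:
  adamw:
    learning_rate: 0.001
    weight_decay: 0.01
roulette:
  warmup_epochs: 0
switch_granularity: epoch
"""

config = yaml.safe_load(PROFILE)
```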

## Research Highlights

Based on the current paper draft, OptiRoulette is a stochastic meta-optimizer
that combines:
- warmup optimizer locking
- randomized sampling from an active optimizer pool
- compatibility-aware LR scaling during optimizer transitions
- failure-aware pool replacement
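
The randomized pool sampling can be sketched in a few lines of plain Python. The pool contents and uniform draw here are made up for illustration; the package samples from its configured optimizer pool according to the bundled profile:

```python
import random

# Hypothetical active pool; the real pool comes from the bundled profile.
pool = ["adam", "adamw", "sgd", "ranger", "adan"]

def spin_roulette(pool, rng):
    """Pick the next active optimizer name from the pool (uniform draw)."""
    return rng.choice(pool)

rng = random.Random(42)  # seeded for reproducibility
picks = [spin_roulette(pool, rng) for _ in range(3)]
```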

Reported mean test accuracy vs a single-optimizer AdamW baseline:

| Dataset | AdamW | OptiRoulette | Delta |
|---|---:|---:|---:|
| CIFAR-100 | 0.6734 | 0.7656 | +9.22 pp |
| CIFAR-100-C | 0.2904 | 0.3355 | +4.51 pp |
| SVHN | 0.9667 | 0.9756 | +0.89 pp |
| Tiny ImageNet | 0.5669 | 0.6642 | +9.73 pp |
| Caltech-256 | 0.5946 | 0.6920 | +9.74 pp |

Additional paper-reported highlights:
- Target-hit reliability: in the reported 10-seed suites, OptiRoulette reaches
  key validation targets in 10/10 runs, while the AdamW baseline reaches none
  of those targets within budget.
- Faster time-to-target on shared milestones (example: Caltech-256 @ 0.59,
  25.7 vs 77.0 epochs, a ~3.0x speedup), with budget-capped lower-bound
  speedups of up to 5.3x for targets the baseline never attains within budget.
- Paired-seed analysis is positive across datasets, except for CIFAR-100-C
  test ROC-AUC, which does not reach statistical significance in the current
  10-seed study.

Important: the metrics above are from the current OptiRoulette paper draft
(PyTorch experiment suite), not from a full Keras multi-seed validation.

## Status

`OptiRoulette-Keras` is currently an **experimental / pre-release** package.
It has not yet been validated through full scientific multi-seed experiment suites.

## Backend Requirement

You must use Keras with the torch backend:

```bash
export KERAS_BACKEND=torch
```

Set this before importing `keras`; Keras reads `KERAS_BACKEND` at import time.

## Install

```bash
pip install OptiRoulette-Keras
```

## Examples

- [CIFAR-100 Keras demo notebook](../../examples/quick_cifar100_optiroulette_keras.ipynb)
- [Tiny-ImageNet Keras demo notebook](../../examples/quick_tiny_imagenet_optiroulette_keras.ipynb)
- Both notebooks use pure `keras.layers` model definitions.

## Quick Use (`compile`/`fit`)

```python
import os
os.environ["KERAS_BACKEND"] = "torch"

import keras
from optiroulette_keras import create_roulette_callback

model = keras.Sequential([
    keras.layers.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(100),
])

# With no arguments, the bundled optimized.yaml defaults are used
# (optimizers, roulette, pool, LR rules, seed fallback). Advanced
# optimizers (e.g. ranger, adan) are resolved via the pytorch-optimizer adapter.
controller, roulette_cb = create_roulette_callback()
# This call is required: without the callback there is no OptiRoulette switching.
# verbose levels:
#   0 -> no OptiRoulette output (default)
#   1 -> print only the active optimizer
#   2 -> include all OptiRoulette metrics in fit/progbar logs

model.compile(
    optimizer=controller.active_optimizer,
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# train_loader / val_loader: torch DataLoader (or other Keras-compatible
# dataset) objects you have prepared.
model.fit(train_loader, validation_data=val_loader, epochs=10, callbacks=[roulette_cb])
```

Custom arguments example:

```python
controller, roulette_cb = create_roulette_callback(
    optimizer_specs={
        "adam": {"learning_rate": 1e-3},
        "adamw": {"learning_rate": 1e-3, "weight_decay": 0.01},
        "sgd": {"learning_rate": 5e-2, "momentum": 0.9, "nesterov": True},
    },
    roulette={"warmup_epochs": 0, "warmup_optimizer": None, "warmup_config": {}},
    switch_granularity="epoch",
)
```

Any argument you omit falls back to the defaults in the bundled `optimized.yaml`.

## API

```python
from optiroulette_keras import (
    OptiRoulette,
    OptiRouletteOptimizer,
    OptiRouletteCallback,
    create_roulette_callback,
    PoolConfig,
    ensure_torch_backend,
    get_default_config,
    get_default_seed,
    get_default_optimizer_specs,
    get_default_pool_setup,
    get_default_roulette_config,
)
```

## Package Layout

This folder is a standalone PyPI package inside the same repository:

- `packages/optiroulette-keras/pyproject.toml`
- `packages/optiroulette-keras/src/optiroulette_keras/*`
- `packages/optiroulette-keras/tests/*`

## Build

From this folder:

```bash
python -m pip install --upgrade build twine
python -m build
python -m twine check dist/*
```
