Metadata-Version: 2.4
Name: tensor-ml
Version: 0.2.0
Summary: A Python library for tensor analysis, multilinear algebra, tensor regression, and multidimensional sparse signal representations.
Project-URL: Homepage, https://github.com/imaduranga/tensor-ml
Project-URL: Repository, https://github.com/imaduranga/tensor-ml
Project-URL: Issues, https://github.com/imaduranga/tensor-ml/issues
Project-URL: Changelog, https://github.com/imaduranga/tensor-ml/blob/master/CHANGELOG.md
Author: Ishan Wickramasingha
License: MIT
License-File: LICENSE
Keywords: LARS,T-LARS,machine-learning,multilinear,sparse,tensor
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Mathematics
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: numpy>=1.21.0
Requires-Dist: pydantic>=2.0
Provides-Extra: progress
Requires-Dist: tqdm>=4.60.0; extra == 'progress'
Provides-Extra: torch
Requires-Dist: torch>=1.9.0; extra == 'torch'
Description-Content-Type: text/markdown

# tensor-ml

[![PyPI version](https://img.shields.io/pypi/v/tensor-ml.svg)](https://pypi.org/project/tensor-ml/)
[![Python](https://img.shields.io/pypi/pyversions/tensor-ml.svg)](https://pypi.org/project/tensor-ml/)
[![Tests](https://github.com/imaduranga/tensor-ml/actions/workflows/tests.yml/badge.svg)](https://github.com/imaduranga/tensor-ml/actions/workflows/tests.yml)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)

A Python library for tensor analysis, multilinear algebra, tensor regression, and multidimensional sparse signal representations.

Implements the **T-LARS** algorithm from:

> Ishan Wickramasingha, Ahmed Elrewainy, Michael Sobhy, Sherif S. Sherif;
> **Tensor Least Angle Regression for Sparse Representations of Multidimensional Signals.**
> *Neural Computation* 2020; 32 (9): 1697–1732.
> doi: [10.1162/neco_a_01304](https://doi.org/10.1162/neco_a_01304)

## Features

- **Backend-agnostic** — unified API for NumPy and PyTorch (auto-detects input type)
- **Tensor products** — Kronecker, Khatri-Rao, Hadamard, full multilinear product
- **T-LARS** — Tensor Least Angle Regression & Selection for sparse tensor recovery
- **scikit-learn-style interface** — `fit` / `predict` / `score` / `get_params` / `set_params`
- **Device management** — `.to('cuda')`, `.cpu()`, `.cuda()` for PyTorch backend
- **Validated configuration** — Pydantic-based parameter validation with clear error messages

## Installation

```bash
pip install tensor-ml
```

With optional PyTorch support:

```bash
pip install "tensor-ml[torch]"
```

Development install from source (using [uv](https://docs.astral.sh/uv/)):

```bash
git clone https://github.com/imaduranga/tensor-ml.git
cd tensor-ml
uv sync --all-extras       # creates .venv and installs all deps
uv run pytest              # run the test suite
```

Or with pip:

```bash
pip install -e ".[torch]"
```

## Quick Start

### Tensor Products

```python
import numpy as np
from tensor_ml import TensorProducts

A = np.array([[1, 2], [3, 4]])
B = np.eye(2)

# Kronecker product
K = TensorProducts.kronecker_product([A, B])

# Full multilinear product: X ×₁ A ×₂ B
X = np.random.randn(2, 2)
Y = TensorProducts.full_multilinear_product(X, [A, B])
```
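
For intuition, `kronecker_product` on a list of matrices corresponds to folding NumPy's `np.kron` pairwise over the list, and for a 2-D `X` the full multilinear product X ×₁ A ×₂ B reduces to `A @ X @ B.T`. The sketch below is plain NumPy illustrating that standard linear algebra; it does not use the library's internals:

```python
from functools import reduce
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.eye(2)
X = np.arange(4.0).reshape(2, 2)

# Kronecker product of a list of matrices: fold np.kron pairwise
K = reduce(np.kron, [A, B])

# For a 2-D tensor, X x1 A x2 B is just A @ X @ B.T
Y = A @ X @ B.T

print(K.shape, Y.shape)  # (4, 4) (2, 2)
```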

### T-LARS: Sparse Tensor Recovery

```python
import numpy as np
from tensor_ml import TLARS

# Per-mode dictionaries
D1 = np.random.randn(8, 16)
D2 = np.random.randn(8, 16)

# Target tensor
Y = np.random.randn(8, 8)

# Fit
model = TLARS(tolerance=0.01, l0_mode=True)
model.fit(factor_matrices=[D1, D2], Y=Y)

# Predict & score
Y_hat = model.predict([D1, D2])
r2 = model.score([D1, D2], Y)
print(f"R² = {r2:.4f}, iterations = {model.n_iter_}")
```
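
The reason per-mode dictionaries work is the classical vectorization identity: if Y = D₁ X D₂ᵀ, then vec(Y) = (D₂ ⊗ D₁) vec(X) (column-major vec), so the separable problem is equivalent to an ordinary sparse regression against a Kronecker dictionary, without ever materializing it. A NumPy check of the identity itself, independent of the library:

```python
import numpy as np

rng = np.random.default_rng(0)
D1 = rng.standard_normal((8, 16))
D2 = rng.standard_normal((8, 16))
X = rng.standard_normal((16, 16))  # coefficient tensor

# Mode-wise reconstruction: Y = X x1 D1 x2 D2 = D1 @ X @ D2.T
Y = D1 @ X @ D2.T

# Equivalent flattened system: vec(Y) = (D2 kron D1) vec(X),
# using column-major (Fortran-order) vectorization
lhs = Y.ravel(order="F")
rhs = np.kron(D2, D1) @ X.ravel(order="F")
print(np.allclose(lhs, rhs))  # True
```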

### PyTorch Backend

```python
import torch
from tensor_ml import TensorProducts

A = torch.randn(3, 3)
B = torch.randn(3, 3)
K = TensorProducts.kronecker_product([A, B])  # auto-uses TorchTensorProducts
```

## Documentation

| Resource | Description |
|----------|-------------|
| [Quickstart Tutorial](docs/quickstart.ipynb) | Interactive notebook walkthrough |
| [T-LARS Image Reconstruction](docs/examples/tlars_image_reconstruction.ipynb) | Visual demo with DCT dictionaries |
| [API Reference](docs/api_reference.md) | Complete class and method reference |
| [User Guide](docs/user_guide.md) | Concepts, architecture, and extension guide |

## Architecture

```
tensor_ml/
├── enums.py              # BackendType enum
├── exceptions.py         # Custom exception hierarchy
├── utils.py              # Backend inference
├── tensor_ops/
│   ├── tensor_ops.py     # TensorOps ABC + NumpyOps + TorchOps + Factory
│   ├── tensor_products_base.py   # TensorProductsBase ABC
│   ├── tensor_products_numpy.py  # NumPy backend
│   ├── tensor_products_torch.py  # PyTorch backend
│   └── tensor_products.py        # Static facade + Factory
└── tensor_models/
    ├── base.py            # BaseTensorModel ABC
    └── multilinear/
        ├── multilinear_model.py  # MultilinearModel base
        └── tlars.py              # TLARS algorithm + TLARSConfig
```
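
The backend inference in `utils.py` that drives the dispatch above can be pictured as a simple type check on the input tensor. The following is a hypothetical sketch, not the library's actual code; `infer_backend` and the enum layout are assumptions for illustration:

```python
from enum import Enum
import numpy as np

class BackendType(Enum):  # conceptual stand-in for enums.py
    NUMPY = "numpy"
    TORCH = "torch"

def infer_backend(x) -> BackendType:
    """Hypothetical sketch: pick a backend from the input's type."""
    if type(x).__module__.startswith("torch"):
        return BackendType.TORCH  # avoids importing torch just to check
    if isinstance(x, np.ndarray):
        return BackendType.NUMPY
    raise TypeError(f"Unsupported tensor type: {type(x)!r}")

print(infer_backend(np.zeros(3)))  # BackendType.NUMPY
```

Checking the type's module name rather than importing `torch` keeps NumPy-only installs working when the optional dependency is absent.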

## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/my-feature`)
3. Run the test suite (`uv run pytest`)
4. Submit a pull request against `master`

Releases are fully automated: merging a PR that bumps the version in
`pyproject.toml` and `src/tensor_ml/__init__.py` publishes the package to PyPI,
creates a git tag, and opens the next version-bump PR automatically.

## Citation

If you use tensor-ml in your research, please cite the underlying T-LARS algorithm:

```bibtex
@article{wickramasingha2020tlars,
  title   = {Tensor Least Angle Regression for Sparse Representations of Multidimensional Signals},
  author  = {Wickramasingha, Ishan and Elrewainy, Ahmed and Sobhy, Michael and Sherif, Sherif S.},
  journal = {Neural Computation},
  year    = {2020},
  volume  = {32},
  number  = {9},
  pages   = {1697--1732},
  doi     = {10.1162/neco_a_01304}
}
```

You may also cite the software itself:

```bibtex
@software{tensor_ml,
  title   = {tensor-ml: Tensor Machine Learning Library},
  author  = {Wickramasingha, Ishan},
  version = {0.2.0},
  url     = {https://github.com/imaduranga/tensor-ml},
  license = {MIT}
}
```

## License

MIT — see [LICENSE](LICENSE) for details.
