Metadata-Version: 2.4
Name: ptnl
Version: 0.1.0a0
Summary: PyTorch-native nonlinear optimization toolbox
Author: Matthias Lenga
License: Proprietary
Requires-Python: >=3.10
Requires-Dist: matplotlib>=3.9
Requires-Dist: numpy>=2.0
Requires-Dist: seaborn>=0.13
Requires-Dist: torch>=2.11
Provides-Extra: dev
Requires-Dist: pytest>=8.2; extra == 'dev'
Description-Content-Type: text/markdown

<img width="2542" height="859" style="width: 50%; height: auto;" alt="ptnl_logo" src="https://github.com/user-attachments/assets/5ea171d9-d76f-457a-9b0a-bc7c2d1c1a33" /><br/>



PTNL is a PyTorch-native library for nonlinear optimization, with a current focus on dense nonlinear least squares and constrained nonlinear programs.

It is aimed at researchers and data scientists who want solver logic, diagnostics, and differentiation behavior to remain visible in ordinary PyTorch workflows rather than disappearing behind a black-box wrapper.


## Current Scope

PTNL currently includes:

- dense nonlinear least-squares problems
- constrained nonlinear programs with equality constraints, inequality constraints, and bounds
- Gauss-Newton, Levenberg-Marquardt, trust-region, SQP, and interior-point solver paths
- explicit solver diagnostics, iteration history, and reproducibility metadata
- shared-structure batching for repeated least-squares solves
- explicit unrolled and conservative implicit differentiation modes
- CPU and CUDA benchmark harnesses and example scripts
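
For orientation, the core linear algebra behind the Gauss-Newton and Levenberg-Marquardt paths is a damped normal-equations solve with a damping update. The following NumPy sketch of that iteration on a toy exponential fit illustrates the algorithm family only; it is not PTNL's implementation:

```python
import numpy as np


def levenberg_marquardt(residual, jacobian, x0, iters=100, damping=1e-3):
    """Minimal Levenberg-Marquardt loop with a crude accept/reject damping update."""
    x = x0.copy()
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        # Damped normal equations: (J^T J + lambda * I) dx = -J^T r
        dx = np.linalg.solve(J.T @ J + damping * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(residual(x + dx)) < np.linalg.norm(r):
            x, damping = x + dx, damping * 0.5  # step reduced the residual: accept
        else:
            damping *= 10.0                     # step failed: damp harder
    return x


# Toy problem: fit y = a * exp(-b * t), true parameters a=2, b=0.5
t = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-0.5 * t)
residual = lambda p: p[0] * np.exp(-p[1] * t) - y
jacobian = lambda p: np.stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)], axis=1)

x = levenberg_marquardt(residual, jacobian, np.array([1.0, 0.1]))
print(x)  # converges to approximately [2.0, 0.5]
```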

## Install

Create the development environment with `uv`:

```powershell
uv sync
```

This installs the default `dev` group, including `pytest`, and resolves `torch` from the configured PyTorch wheel index.

If you need an editable install on top of the synced environment:

```powershell
uv pip install -e . --no-deps
```

Run the tests:

```powershell
uv run --group dev python -m pytest
```

## Basic Use

```python
import torch

from pytorch_nonlinear import NonlinearLeastSquaresProblem, SolverConfig, solve

# Synthetic data for an exponential-decay fit y = a * exp(-b * x)
x_data = torch.linspace(0.0, 4.0, 50, dtype=torch.float64)
y_data = 2.0 * torch.exp(-0.5 * x_data)


def residual(state, params):
    x = params["x"]
    y = params["y"]
    prediction = state[0] * torch.exp(-state[1] * x)
    return prediction - y


problem = NonlinearLeastSquaresProblem(residual=residual)
result = solve(
    problem,
    x0=torch.tensor([1.0, 0.1], dtype=torch.float64),
    params={"x": x_data, "y": y_data},
    config=SolverConfig(method="lm"),
)

print(result.x)
print(result.objective_value)
print(result.gradient_norm)
```

If `device` is not specified, PTNL follows the device placement of the input tensors.

## Common Patterns

Choose a device explicitly:

```python
result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(method="lm", device="cuda"),
)
```

Run the trust-region least-squares solver:

```python
result = solve(problem, x0=x0, params=params, config=SolverConfig(method="trust_region"))
print(result.history[-1].trust_region_radius)
print(result.history[-1].trust_region_ratio)
```
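
The `trust_region_ratio` above corresponds to the standard trust-region gain ratio: the actual objective reduction divided by the reduction predicted by the local model. A generic NumPy sketch of that bookkeeping, in the textbook formulation rather than PTNL's internals:

```python
import numpy as np

# One-variable least squares: residual r(x) = x^2 - 2, objective 0.5 * r^2
residual = lambda x: np.array([x[0] ** 2 - 2.0])
jacobian = lambda x: np.array([[2.0 * x[0]]])

x = np.array([1.0])
r, J = residual(x), jacobian(x)
p = np.linalg.solve(J.T @ J, -J.T @ r)  # Gauss-Newton trial step

# Gain ratio: actual reduction vs. reduction predicted by the linearized model
actual = 0.5 * (r @ r) - 0.5 * (residual(x + p) @ residual(x + p))
predicted = 0.5 * (r @ r) - 0.5 * ((r + J @ p) @ (r + J @ p))
rho = actual / predicted

# A typical radius update driven by the ratio
radius = 1.0
if rho > 0.75:
    radius *= 2.0   # model matched reality well: expand the region
elif rho < 0.25:
    radius *= 0.25  # model was misleading: shrink the region
print(rho, radius)
```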

Run a shared-structure batch of least-squares solves:

```python
from pytorch_nonlinear import BatchMode

batch_result = solve(
    problem,
    x0=x0_batch,
    params=params_batch,
    config=SolverConfig(method="lm", batch_mode=BatchMode.SHARED_STRUCTURE),
)

print(batch_result.summary())
print(batch_result.results[0].summary())
```
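
Shared-structure batching pays off because the per-problem linear algebra can be stacked and solved in one vectorized call. A NumPy sketch of the idea; the shapes and einsum contractions here are generic and illustrative, not PTNL's internal layout:

```python
import numpy as np

B, m, n = 8, 50, 2  # batch of 8 problems, 50 residuals, 2 parameters each
rng = np.random.default_rng(0)
J = rng.normal(size=(B, m, n))  # stacked Jacobians, one per problem
r = rng.normal(size=(B, m))     # stacked residuals

# Batched normal equations: every problem gets its Gauss-Newton step at once
JtJ = np.einsum("bmi,bmj->bij", J, J)   # (B, n, n)
Jtr = np.einsum("bmi,bm->bi", J, r)     # (B, n)
dx = np.linalg.solve(JtJ, -Jtr[..., None])[..., 0]  # one batched solve, (B, n)

# Identical to solving each problem separately, but in a single vectorized call
print(dx.shape)  # (8, 2)
```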

Enable automatic scaling:

```python
from pytorch_nonlinear import ScalingConfig, ScalingMode

result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(
        method="trust_region",
        scaling=ScalingConfig(variable_mode=ScalingMode.AUTO, residual_mode=ScalingMode.AUTO),
    ),
)

print(result.diagnosis)
print(result.reproducibility["scaling"])
```
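
Automatic variable scaling typically rescales each parameter by its Jacobian column norm so that poorly scaled variables do not dominate the linear algebra; this is the classic MINPACK-style idea. A generic sketch of the effect, not PTNL's specific scheme:

```python
import numpy as np

# Two variables whose Jacobian columns differ by ~6 orders of magnitude
J = np.array([[1e-3, 1.0e3],
              [2e-3, 5.0e2]])

d = np.linalg.norm(J, axis=0)  # per-variable scale: column norms
J_scaled = J / d               # equivalent to the substitution x = x_scaled / d

print(np.linalg.cond(J), np.linalg.cond(J_scaled))  # conditioning improves sharply
```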

Differentiate through a solve with unrolling:

```python
from pytorch_nonlinear import DiffMode

result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(method="lm", diff_mode=DiffMode.UNROLL),
)

outer_loss = result.x.square().sum()
outer_loss.backward()
print(result.diff_mode_used, result.diff_valid)
```
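
Unrolled differentiation backpropagates through the solver iterations themselves, which is what makes its gradients straightforward but its memory cost proportional to iteration count. A hand-unrolled toy sketch that propagates the sensitivity dx/dθ through gradient-descent updates by the chain rule; PTNL's unrolling goes through autograd, so the manual bookkeeping here is purely illustrative:

```python
# Toy inner problem: minimize f(x, theta) = 0.5 * (x - theta**2)**2 over x.
# The minimizer is x*(theta) = theta**2, so the true derivative is dx*/dtheta = 2*theta.
theta, eta = 1.5, 0.5
x, s = 0.0, 0.0  # iterate x_k and its sensitivity s_k = dx_k / dtheta

for _ in range(60):
    g = x - theta ** 2  # grad_x f at the current iterate
    # Differentiate the update x <- x - eta * g with respect to theta:
    # ds = s - eta * (dg/dx * s + dg/dtheta) = s - eta * (s - 2 * theta)
    s = s - eta * (s - 2.0 * theta)
    x = x - eta * g

print(x, s)  # x -> theta**2 = 2.25, s -> 2 * theta = 3.0
```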

Use the conservative implicit differentiation path:

```python
from pytorch_nonlinear import DiffMode

result = solve(
    problem,
    x0=x0,
    params=params,
    config=SolverConfig(method="lm", diff_mode=DiffMode.IMPLICIT),
)

print(result.diff_mode_used, result.diff_valid)
print(result.diff_condition_estimate, result.diff_linear_residual)
```

Implicit differentiation is attached only when PTNL can certify that the returned point is safe for that path.
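
The implicit path rests on the implicit function theorem: at a stationary point, the sensitivity comes from a single linear solve, and it is only trustworthy when the stationarity residual is small and the linear system is well conditioned, which is plausibly what the condition and residual diagnostics above report. A toy NumPy sketch of the IFT computation itself, not PTNL's internals:

```python
import numpy as np

# Toy inner problem: x*(theta) minimizes f(x, theta) = 0.5 * (x - theta**2)**2,
# so x*(theta) = theta**2 and the true derivative is dx*/dtheta = 2 * theta.
theta = 1.5
x_star = np.array([theta ** 2])     # pretend this came from the solver

grad = x_star - theta ** 2          # stationarity residual grad_x f
assert abs(grad[0]) < 1e-12         # certify stationarity before trusting the IFT

H_xx = np.array([[1.0]])            # d^2 f / dx^2 at the solution
H_xt = np.array([[-2.0 * theta]])   # d^2 f / (dx dtheta)

# IFT: 0 = d/dtheta [grad_x f(x*(theta), theta)]  =>  dx*/dtheta = -H_xx^{-1} H_xt
dx_dtheta = -np.linalg.solve(H_xx, H_xt)
print(float(dx_dtheta[0, 0]))  # 3.0 == 2 * theta
```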

## Benchmarks And Examples

Run the least-squares benchmark harness:

```powershell
python benchmarks/run_benchmarks.py
```

Run the constrained benchmark harness:

```powershell
python benchmarks/run_constrained_benchmarks.py --method sqp
```

Useful example scripts include:

- `python examples/least_squares_curve_fit.py`
- `python examples/least_squares_scaling_effect.py`
- `python examples/rosenbrock_gn_vs_lm.py`
- `python examples/cuda_rosenbrock_gn_lm_tr.py`
- `python examples/cuda_robust_loss_comparison_hard.py`
- `python examples/learned_range_sensor_fusion.py`
- `python examples/cpu_vs_gpu_gn_lm.py`
