Metadata-Version: 2.4
Name: gradnet
Version: 0.4.0
Summary: Trainable graph adjacency parameterizations with ODE integration and Lightning training helpers.
Project-URL: Homepage, https://mikaberidze.github.io/
Project-URL: Documentation, https://gradnet.readthedocs.io/
Project-URL: Repository, https://github.com/mikaberidze/gradnet
Author: Guram Mikaberidze, Beso Mikaberidze, Dane Taylor
License-Expression: BSD-3-Clause
License-File: LICENSE
Keywords: dynamical systems,graphs,lightning,networks,ode,optimization,pytorch
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Mathematics
Requires-Python: >=3.10
Requires-Dist: matplotlib
Requires-Dist: numpy<2,>=1.21
Requires-Dist: pytorch-lightning<3,>=1.9
Requires-Dist: tensorboard
Requires-Dist: torch>=2.0
Requires-Dist: torchdiffeq>=0.2.3
Requires-Dist: tqdm>=4.64
Provides-Extra: examples
Requires-Dist: cmcrameri; extra == 'examples'
Requires-Dist: imageio-ffmpeg>=0.4; extra == 'examples'
Requires-Dist: networkx>=2.6; extra == 'examples'
Provides-Extra: networkx
Requires-Dist: networkx>=2.6; extra == 'networkx'
Description-Content-Type: text/markdown

# GradNet

GradNet is a PyTorch-based framework for AI-enabled optimization of networks. Define static or dynamical objectives and constraints, then discover the optimal network structures.

It encodes the network structure as a differentiable object with optional budget and structure constraints. Users can directly optimize static objectives with a lightweight PyTorch Lightning training loop, or use the built-in ODE solvers to define and optimize dynamical objectives.

<p align="center">
  <img src="docs/source/_static/gradient_descent.png"
       alt="Illustration of the gradient-based optimization pipeline for network structure."
       width="650" />
  <br />
  <em>Illustration of the gradient-based optimization pipeline for network structure.</em>
</p>

<p align="center">
  <img src="docs/source/_static/rewiring_net.gif"
       alt="A random network rewires itself with GradNet to optimize synchronization in the Kuramoto model."
       width="300" />
  <br />
  <em>A random network rewires itself with GradNet to optimize synchronization in the Kuramoto model.</em>
</p>

## Highlights

- Learn dense or sparse adjacency updates with norm, sign, and symmetry constraints.
- Projected parameterizations that stay differentiable and GPU friendly.
- Torchdiffeq-backed integration utilities for graph-driven dynamical systems.
- Minimal Lightning trainer that wraps loss functions in just a few lines.

## Installation

Install the released package from PyPI:

```bash
pip install gradnet
```

To work off the latest sources instead, clone the repository and install in editable mode:

```bash
git clone https://github.com/mikaberidze/gradnet.git
cd gradnet
pip install -e .
```

GradNet targets Python 3.10+ and requires pip 21.3+ (run `pip install --upgrade pip` if needed). It depends on PyTorch, PyTorch Lightning, torchdiffeq, NumPy, and tqdm (installed automatically by the command above). Install the optional NetworkX helpers with `pip install 'gradnet[networkx]'` when you need conversions to `networkx` graphs or plotting utilities that rely on it.

## Documentation

Full API documentation, tutorials, and background material live at [gradnet.readthedocs.io](https://gradnet.readthedocs.io/).

## Quickstart

The `examples/` folder contains several notebooks showing how to use GradNet.

### [Spectral optimization (algebraic connectivity)](examples/1_algebraic_connectivity.ipynb)
Configures a `GradNet` object restricted to a grid lattice, defines a simple static loss function (the algebraic connectivity), and uses `fit` to optimize the network structure, all in the first code cell of the notebook.
The rest of the notebook analyzes the optimal grid and compares the dense and sparse backends.
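The quantity optimized in that notebook can be reproduced in a few lines of plain PyTorch. This sketch (again, not GradNet's API) performs gradient ascent on the algebraic connectivity, the second-smallest eigenvalue of the graph Laplacian, over a symmetric weight matrix with a fixed total edge weight:

```python
import torch

torch.manual_seed(0)

def algebraic_connectivity(W: torch.Tensor) -> torch.Tensor:
    L = torch.diag(W.sum(dim=1)) - W      # graph Laplacian
    return torch.linalg.eigvalsh(L)[1]    # second-smallest eigenvalue (lambda_2)

n = 8
theta = torch.randn(n, n, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.05)
for _ in range(30):
    w = torch.sigmoid(theta)
    W = 0.5 * (w + w.T) * (1 - torch.eye(n))   # symmetric, no self-loops
    W = W * n / W.sum()                        # fixed total edge weight n
    loss = -algebraic_connectivity(W)          # ascend lambda_2
    opt.zero_grad()
    loss.backward()
    opt.step()
```

`torch.linalg.eigvalsh` is differentiable, so the eigenvalue objective backpropagates to the free parameters without any special handling.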

### [Kuramoto network optimization](examples/2_kuramoto.ipynb)
A simple example of a dynamical loss and of using `integrate_ode`.
Demonstrates structural optimization and emergent sparsity without a mask.
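For readers unfamiliar with the model, here is a minimal pure-PyTorch sketch of the Kuramoto dynamics behind this example (explicit Euler stepping for illustration; GradNet's `integrate_ode` uses torchdiffeq solvers instead). The synchronization level is measured by the order parameter `r`, which approaches 1 when the oscillators phase-lock:

```python
import torch

# Kuramoto model: d(theta_i)/dt = omega_i + sum_j A_ij * sin(theta_j - theta_i)
def kuramoto_step(theta, omega, A, dt=0.01):
    phase_diff = theta.unsqueeze(0) - theta.unsqueeze(1)   # [i, j] = theta_j - theta_i
    coupling = (A * torch.sin(phase_diff)).sum(dim=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    # r in [0, 1]; r -> 1 means full phase synchronization
    return torch.abs(torch.exp(1j * theta).mean())

torch.manual_seed(0)
n = 10
theta = 2 * torch.pi * torch.rand(n)   # random initial phases
omega = 0.1 * torch.randn(n)           # natural frequencies
A = torch.ones(n, n) / n               # uniform all-to-all coupling
for _ in range(2000):
    theta = kuramoto_step(theta, omega, A)
r = order_parameter(theta)             # high r: strongly synchronized
```

In the notebook, a loss built from dynamics like these is backpropagated through the ODE solve to rewire the coupling matrix itself.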

### [Zachary's karate club](examples/3_karate_club.ipynb)
An example showing how to optimally modify existing networks.


## Modules at a glance

- `gradnet.GradNet`: wraps dense and sparse parameterizations; supports directed/undirected networks, masking, and custom edge-building costs.
- `gradnet.integrate_ode`: torchdiffeq-powered solver with adjoint and event support for adjacency-dependent dynamics.
- `gradnet.fit`: PyTorch Lightning loop that optimizes a `GradNet` using user-supplied loss functions.
- `gradnet.utils`: assorted helper functions.

## Credits

GradNet relies on (and is inspired by) the following open-source projects:

- [PyTorch](https://pytorch.org/)
- [PyTorch Lightning](https://lightning.ai/)
- [torchdiffeq](https://github.com/rtqichen/torchdiffeq)


## License

GradNet is released under the BSD 3-Clause License. See `LICENSE` for details.
