Metadata-Version: 2.4
Name: openretina
Version: 1.1.0
Summary: Open source retina model architectures and training setups
Author: Federico D'Agostino, Thomas Zenkel, Larissa Höfling
Project-URL: Homepage, https://github.com/open-retina/open-retina
Project-URL: Issues, https://github.com/open-retina/open-retina/issues
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: IPython
Requires-Dist: jaxtyping
Requires-Dist: matplotlib
Requires-Dist: numpy
Requires-Dist: pandas
Requires-Dist: standard-imghdr
Requires-Dist: seaborn
Requires-Dist: scipy
Requires-Dist: tenacity
Requires-Dist: tensorboard
Requires-Dist: torch
Requires-Dist: torchaudio
Requires-Dist: torchvision
Requires-Dist: einops
Requires-Dist: lightning
Requires-Dist: wandb
Requires-Dist: hydra-core
Requires-Dist: omegaconf
Requires-Dist: h5py
Requires-Dist: opencv-python-headless
Requires-Dist: imageio>=2.36
Requires-Dist: moviepy>=2.0.0
Requires-Dist: huggingface_hub>=0.25
Requires-Dist: jupyter
Requires-Dist: ipywidgets
Requires-Dist: mlflow
Provides-Extra: dev
Requires-Dist: ruff>=0.9; extra == "dev"
Requires-Dist: mypy>=1.0; extra == "dev"
Requires-Dist: pandas-stubs; extra == "dev"
Requires-Dist: pytest; extra == "dev"
Requires-Dist: types-psutil; extra == "dev"
Requires-Dist: types-tqdm; extra == "dev"
Requires-Dist: types-PyYAML; extra == "dev"
Requires-Dist: types-requests; extra == "dev"
Requires-Dist: nbmake; extra == "dev"
Requires-Dist: jupyterlab; extra == "dev"
Provides-Extra: devmodels
Requires-Dist: neuralpredictors; extra == "devmodels"
Provides-Extra: optuna
Requires-Dist: hydra-optuna-sweeper; extra == "optuna"
Dynamic: license-file

# OpenRetina <img src="https://raw.githubusercontent.com/open-retina/open-retina/7aacfa64267930f787b16f24e4bc17047f285c25/assets/openretina_logo.png" align="right" width="120" />

[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![mypy](https://img.shields.io/badge/type%20checked-mypy-039dfc)](https://github.com/python/mypy)
[![pytorch](https://img.shields.io/badge/PyTorch_2.0+-ee4c2c?logo=pytorch\&logoColor=white)](https://pytorch.org/get-started/locally/)
[![lightning](https://img.shields.io/badge/-Lightning_2.0+-792ee5?logo=pytorchlightning\&logoColor=white)](https://pytorchlightning.ai/)
[![hydra](https://img.shields.io/badge/Config-Hydra_1.3-89b8cd)](https://hydra.cc/)
[![DOI](https://zenodo.org/badge/722208169.svg)](https://doi.org/10.5281/zenodo.14988814)

[![huggingface](https://huggingface.co/datasets/huggingface/badges/resolve/main/dataset-on-hf-sm.svg)](https://huggingface.co/datasets/open-retina/open-retina)

Open-source repository containing neural network models of the retina.
The models in this repository are inspired by, and contain code adapted from, [sinzlab/neuralpredictors](https://github.com/sinzlab/neuralpredictors). Accompanying preprint: [openretina: Collaborative Retina Modelling Across Datasets and Species](https://www.biorxiv.org/content/10.1101/2025.03.07.642012v1).

## Installation

`openretina` supports installation via pip.

```bash
# (Recommended) using a package manager like uv
uv pip install openretina

# Or directly via pip if you prefer
pip install openretina
```

If you want to train your own models, run Jupyter notebooks, contribute to the project, or modify the source code of `openretina`, we recommend installing from source.
Consider using `uv`, a fast and flexible project and package manager. If you are not familiar with uv, check out their [simple quickstart guide](https://docs.astral.sh/uv/).

```bash
git clone git@github.com:open-retina/open-retina.git
cd open-retina

# Sync with uv
uv sync --extra dev

# Alternatively, install in editable mode via pip. 
pip install -e .[dev]
```

Test openretina by downloading a model and running a forward pass:

```python
import torch
from openretina.models import load_core_readout_from_remote

# Download a pre-trained core + readout model and run it on a random stimulus
model = load_core_readout_from_remote("hoefling_2024_low_res", "cpu")
responses = model.forward(torch.rand(model.stimulus_shape(time_steps=50)))
```

## Contributing

Before raising a PR, please run:

```bash
# Fix formatting of python files
make fix-formatting

# Run type checks and unit tests
make test-all
```

## Design decisions and structure

With this repository we provide pre-trained retina models that can be used for inference and interpretability out of the box, as well as dataloaders and model architectures for training new models.
For training new models, we rely on [PyTorch Lightning](https://lightning.ai/docs/pytorch/stable/) in combination with [Hydra](https://hydra.cc/docs/intro/) to manage the configurations for training and dataloading.
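
Hydra composes a training run from groups of YAML configs. As a purely illustrative sketch (the group and key names below are hypothetical, not the package's actual schema — see the repository's config files for the real layout), a composed training config might look like:

```yaml
# Hypothetical Hydra config sketch -- group and key names are illustrative only.
defaults:
  - model: core_readout        # which model architecture to build
  - data: hoefling_2024        # which dataloader/dataset to use
  - _self_

trainer:                       # passed through to the Lightning Trainer
  max_epochs: 100
  accelerator: auto

seed: 42
```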

The openretina package is structured as follows:

* modules: PyTorch modules that define layers and losses
* models: PyTorch Lightning models that can be trained and evaluated (i.e. models from specific papers)
* data\_io: dataloaders that manage access to the data used for training
* insilico: methods to perform *in silico* experiments with the above models
  * stimulus\_optimization: optimize inputs for neurons of the above models according to interpretable objectives (e.g. most exciting inputs)
  * future options: gradient analysis, data analysis
* utils: utility functions used across the above submodules
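
To illustrate the idea behind stimulus optimization, here is a minimal, package-independent sketch in NumPy: projected gradient ascent finds the unit-norm input that maximally excites a toy linear "neuron" (the actual optimizers in `openretina` operate on trained PyTorch models, and this stands in for them only conceptually):

```python
import numpy as np

# Toy "neuron": response = w . x, a linear model standing in for a trained network.
# We seek a most exciting input (MEI): the unit-norm x maximizing the response.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # the neuron's (unknown to the optimizer) weights
x = rng.normal(size=100)   # random starting stimulus

lr = 0.1
for _ in range(200):
    grad = w                       # gradient of the response w.x with respect to x
    x = x + lr * grad              # gradient ascent step on the objective
    x = x / np.linalg.norm(x)      # project back onto the unit sphere (norm constraint)

# For a linear neuron, the MEI converges to w / ||w||.
```

Against a real model, `grad` would come from backpropagation through the network instead of a closed form, and the norm constraint keeps the optimized stimulus in a physically plausible range.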

## Related papers and data sources

* hoefling\_2024: Originally published by Höfling et al. (2024), eLife
  * Paper: [A chromatic feature detector in the retina signals visual context changes](https://doi.org/10.7554/eLife.86860).
  * Dataset originally deposited at: https://gin.g-node.org/eulerlab/rgc-natstim
* karamanlis\_2024: Originally published by Karamanlis et al. (2024), Nature
  * Paper: [Nonlinear receptive fields evoke redundant retinal coding of natural scenes](https://doi.org/10.1038/s41586-024-08212-3)
  * Dataset: Karamanlis D, Gollisch T (2023) Dataset - Marmoset and mouse retinal ganglion cell responses to natural stimuli and supporting data. G-Node. https://doi.org/10.12751/g-node.ejk8kx
* maheswaranathan\_2023: Originally published by Maheswaranathan et al. (2023), Neuron
  * Paper: [Interpreting the retinal neural code for natural scenes: From computations to neurons](https://doi.org/10.1016/j.neuron.2023.06.007)
  * Dataset: Maheswaranathan, N., McIntosh, L., Tanaka, H., Grant, S., Kastner, D., Melander, J., Nayebi, A., Brezovec, L., Wang, J. Ganguli, S. Baccus, S. (2023). Interpreting the retinal neural code for natural scenes: from computations to neurons. Stanford Digital Repository. Available at https://purl.stanford.edu/rk663dm5577
* goldin\_2022: Originally published by Goldin et al. (2022), Nature Communications
  * Paper: [Context-dependent selectivity to natural images in the retina](https://doi.org/10.1038/s41467-022-33242-8)
  * Dataset originally deposited at: https://zenodo.org/records/6868362
* sridhar\_2025: Originally published by Sridhar et al. (2025), bioRxiv
  * Paper: [Modeling spatial contrast sensitivity in responses of primate retinal ganglion cells to natural movies](https://www.biorxiv.org/content/10.1101/2024.03.05.583449v1)
  * Dataset: Sridhar S, Gollisch T (2025) Dataset - Marmoset retinal ganglion cell responses to naturalistic movies and spatiotemporal white noise. G-Node. https://doi.gin.g-node.org/10.12751/g-node.3dfiti/
  * Models: Models trained on this dataset were developed as part of [A systematic comparison of predictive models on the retina](https://www.biorxiv.org/content/10.1101/2024.03.06.583740v2)
  

The paper [Most discriminative stimuli for functional cell type clustering](https://openreview.net/forum?id=9W6KaAcYlr) explains the discriminatory stimulus objective we showcase in [notebooks/most\_discriminative\_stimulus](https://github.com/open-retina/open-retina/blob/main/notebooks/most_discriminative_stimulus.ipynb).
