Metadata-Version: 2.4
Name: briar
Version: 0.2.0
Summary: Spiking neural network models of V1 development using Brian2
Author: Zubin Kane
License: BSD 3-Clause License
        
        Copyright (c) 2026, Zubin Kane
        
        Redistribution and use in source and binary forms, with or without
        modification, are permitted provided that the following conditions are met:
        
        1. Redistributions of source code must retain the above copyright notice, this
           list of conditions and the following disclaimer.
        
        2. Redistributions in binary form must reproduce the above copyright notice,
           this list of conditions and the following disclaimer in the documentation
           and/or other materials provided with the distribution.
        
        3. Neither the name of the copyright holder nor the names of its
           contributors may be used to endorse or promote products derived from
           this software without specific prior written permission.
        
        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
        AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
        IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
        DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
        FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
        DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
        SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
        CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
        OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
        OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
        
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Bio-Informatics
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: brian2>=2.10.1
Requires-Dist: numpy<2.4,>=2.0
Requires-Dist: matplotlib>=3.8
Requires-Dist: scipy>=1.12
Requires-Dist: tqdm>=4.60
Requires-Dist: rich>=13.0
Requires-Dist: dill>=0.3.8
Requires-Dist: joblib>=1.3
Requires-Dist: ipykernel>=7
Provides-Extra: cuda
Requires-Dist: brian2cuda; extra == "cuda"
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"
Requires-Dist: pytest-xdist; extra == "dev"
Requires-Dist: nb-execute>=1.0.0; extra == "dev"
Dynamic: license-file

# briar

Large-scale spiking neural network (SNN) framework with dendritic compartments, built on Brian2

Research by *Zubin Kane* -- **Makin Lab** at **Purdue University**

## Overview

briar provides pre-built architectures for studying how orientation selectivity and phase diversity emerge in visual cortex through spike-timing-dependent plasticity (STDP). It wraps [Brian2](https://brian2.readthedocs.io/) with a declarative layer for defining neuron pools, synapse pools, and learning rules, then handles device setup, compilation, monitoring, and result serialization automatically.

Key features:
- **Declarative architecture definition** -- define pools and synapses as dataclasses; briar generates the Brian2 equations
- **Built-in architectures** for common V1 models (feedforward, two-layer simple/complex, efficient coding)
- **Custom architecture** for building networks from scratch
- **cpp_standalone and CUDA support** with incremental compilation for fast repeated runs
- **Parameter sweeps** with automatic grid search
- **Rich result objects** with summary dashboards, diffs, and architecture-aware plotting

## Installation

```bash
pip install briar
```
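
The package also declares optional extras for GPU support and development tooling:

```bash
pip install "briar[cuda]"   # brian2cuda for CUDA standalone mode (Linux)
pip install "briar[dev]"    # pytest, pytest-xdist, nb-execute
```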

## Quick Start

The simplest way to run an experiment is to pick an architecture and a task, then call `buildrun()`:

```python
from briar import SimpleComplex, NaturalImageTask

task = NaturalImageTask(n_patterns=500, image_size=32)
arch = SimpleComplex(task)
results = arch.buildrun()
```

This builds the full Brian2 network, runs the simulation, and returns a `Results` object containing all parameters, spike monitors, weight histories, and connectivity.

### Inspecting results

```python
# In Jupyter, just evaluate `results` to see the rich dashboard
results

# In a script, print() gives the same dashboard in plain text
print(results)

# results.summary() also prints the dashboard (same as print)
results.summary()

# Show only parameters that differ from defaults
results.diff()

# Architecture-aware default plots
results.plot()
results.plot(full=True)    # additional diagnostics
results.plot(debug=True)   # low-level debug panels
```
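
Results are also serialized to disk automatically, so earlier runs can be reloaded. A minimal sketch, using the pickle path layout shown in the comparison-plots section below:

```python
from briar import Results

# dumps/<Architecture>/<Task>/<timestamp>.pkl
r = Results.load('dumps/SimpleComplex/NaturalImageTask/20260304_091654.pkl')
r.summary()   # same dashboard as print(r)
```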

### Modifying parameters

Every architecture parameter can be overridden at construction. The `diff()` method shows exactly what changed:

```python
arch = SimpleComplex(
    task,
    eta_ff_simple=1e-3,       # increase simple cell learning rate
    ff_complex_radius=6.0,    # widen complex cell receptive fields
)
results = arch.buildrun()

# diff() highlights only the non-default values
results.diff()
```

### Adding pools to an existing architecture

Any architecture can be extended with additional pools after construction. Added pools are automatically discovered and built:

```python
from briar import SimpleComplex, NaturalImageTask, NeuronPool, SynapsePool

task = NaturalImageTask(n_patterns=500, image_size=32)
arch = SimpleComplex(task)

# Add a new neuron pool and connect it
arch.add(NeuronPool(name='readout', n_neurons=8))
arch.add(SynapsePool(
    name='ff_readout',
    source=arch.simple_layer,
    target=arch.readout,
))

results = arch.buildrun()
```
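
Monitors attach to added pools the same way as in the Custom example below -- a one-line sketch, assuming `add_monitor` works on any pool:

```python
# Record spikes from the newly added pool
arch.readout.add_monitor('spikes')
```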

## Device Modes

briar supports four device modes:

```python
from briar import SimpleComplex, NaturalImageTask, SimConfig

task = NaturalImageTask(n_patterns=500, image_size=32)

# C++ standalone (default) -- compiled to native code
arch = SimpleComplex(task)
results = arch.buildrun()

# Runtime -- interpreted, no compilation, useful for quick tests
sim = SimConfig(use_cpp=False)
arch = SimpleComplex(task, sim=sim)
results = arch.buildrun()

# CUDA standalone -- compiled to GPU code (requires brian2cuda on Linux)
sim = SimConfig(use_cuda=True)
arch = SimpleComplex(task, sim=sim)
results = arch.buildrun()

# CUDA without reduce -- skip .cu file combining (for debugging)
sim = SimConfig(use_cuda=True, cuda_reduce=False)
arch = SimpleComplex(task, sim=sim)
results = arch.buildrun()
```

C++ and CUDA modes use incremental compilation -- the first build is slow, but subsequent runs reuse the compiled binary and finish in seconds.
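
A rough sketch of what that looks like in practice, assuming parameter-only changes do not force a full rebuild:

```python
import time
from briar import SimpleComplex, NaturalImageTask

task = NaturalImageTask(n_patterns=500, image_size=32)

t0 = time.time()
SimpleComplex(task).buildrun()                       # first run: full compile
print(f"first run:  {time.time() - t0:.1f}s")

t0 = time.time()
SimpleComplex(task, eta_ff_simple=1e-3).buildrun()   # later run: reuses binary
print(f"second run: {time.time() - t0:.1f}s")
```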

## Architectures

### Simple

Feedforward-only: LGN ON/OFF inputs connect to simple cells via STDP. Tests whether feedforward learning alone can produce orientation selectivity with phase diversity.

```
input_pool (LGN ON/OFF) -> simple_layer (STDP)
```

```python
from briar import Simple, RetinalWaveTask

task = RetinalWaveTask(n_waves=200, image_size=32)
arch = Simple(task, eta_ff_simple=5e-4)
results = arch.buildrun()
```

### SimpleComplex

Two-layer architecture inspired by Antolik & Bednar (2011) with dendritic predictive coding from Mikulasch et al. (2021). Simple cells learn from LGN input via somatic STDP; complex cells learn from simple cells via dendritic predictive coding. Includes Mexican hat recurrent connections and feedback.

```
input_pool (LGN ON/OFF) -> simple_layer (somatic, STDP)
simple_layer -> complex_layer (dendritic, predictive coding)
complex_layer -> complex_layer (Mexican hat recurrent)
complex_layer -> simple_layer (Mexican hat feedback, fixed)
```

```python
from briar import SimpleComplex, NaturalImageTask

task = NaturalImageTask(n_patterns=500, image_size=32)
arch = SimpleComplex(task)
results = arch.buildrun()
```

### EfficientEncoder

Single-layer efficient coding model with feedforward predictive coding, Hebbian recurrent connections, and a decoder for reconstruction loss monitoring.

```
input_pool -> layer1 (dendritic, predictive coding)
layer1 -> layer1 (dendritic, Hebbian recurrent)
layer1 -> decoder (reconstruction loss)
```

```python
from briar import EfficientEncoder, BarTask

task = BarTask(image_size=8, n_patterns=1000)
arch = EfficientEncoder(task)
results = arch.buildrun()
```

### Custom

A blank architecture with no pre-defined pools. Build any network from scratch by adding pools and wiring them manually:

```python
from briar import (
    Custom, NeuronPool, SynapsePool, PoissonPool,
    DecoderPool, SimConfig, PlasticityRule,
)
from briar.tasks import BarTask
from briar.datastructures import Compartment, DendriticRule

task = BarTask(image_size=8, n_patterns=500)
arch = Custom(task)

# Input layer
arch.add(PoissonPool(name='input', task=task))

# Hidden layer
arch.add(NeuronPool(name='hidden', n_neurons=16))

# Feedforward synapses with STDP
arch.add(SynapsePool(
    name='ff',
    source=arch.input,
    target=arch.hidden,
    eta=5e-4,
    plasticity_rule=PlasticityRule.STDP,
))

# Spike monitor
arch.hidden.add_monitor('spikes')

results = arch.buildrun()
```

## Parameter Sweeps

Run multiple experiments varying one or more parameters:

```python
from briar import SimpleComplex, NaturalImageTask, sweep

task = NaturalImageTask(n_patterns=500)

# Sweep a single parameter
results = sweep(
    SimpleComplex, task, use_cpp=False,
    eta_ff_simple=[5e-5, 5e-4, 5e-3, 5e-2, 5e-1],
)

# Mix fixed overrides with swept parameters
results = sweep(
    SimpleComplex, task,
    ff_complex_radius=6.0,                          # scalar -> fixed
    eta_ff_simple=[5e-5, 5e-4, 5e-3, 5e-2, 5e-1],  # list -> swept
)

# Multi-parameter grid (Cartesian product)
results = sweep(
    SimpleComplex, task,
    eta_ff_simple=[1e-4, 1e-3],
    eta_ff_complex=[5e-5, 5e-4],
)  # 4 runs total
```

## Comparison Plots

Compare results from a sweep (or manually loaded pickles) side-by-side:

```python
from briar import plot_raster_comparison, plot_rate_ridgeline, Results

# Stacked raster plots
plot_raster_comparison(results, layer='simple')
plot_raster_comparison(results, layer='complex')

# Ridgeline firing rate distributions
plot_rate_ridgeline(results, layer='simple')
plot_rate_ridgeline(results, layer='complex')

# Works with manually loaded results too
r1 = Results.load('dumps/SimpleComplex/NaturalImageTask/20260304_091654.pkl')
r2 = Results.load('dumps/SimpleComplex/NaturalImageTask/20260304_131307.pkl')
plot_raster_comparison([r1, r2], labels=['eta=1e-4', 'eta=1e-3'])
```

## Testing

```bash
# All tests
pytest

# Fast tests only -- runtime mode; fastest run sequentially, so clear the parallel addopts
pytest -m "not slow" --override-ini="addopts="

# Slow tests only - cpp_standalone compile+run
pytest -m slow

# Select tests for a specific architecture (markers can be combined)
pytest -m encoder
pytest -m "slow and simple_complex"
```
