Metadata-Version: 2.4
Name: spikelab
Version: 0.1.0
Summary: Python library for agentic spike train analysis of neural electrophysiology data
Author: Tjitse van der Molen, Braingeneers, SpikeLab Maintainers
License: MIT
Project-URL: Homepage, https://github.com/braingeneers/SpikeLab
Project-URL: Documentation, https://spikelab.braingeneers.gi.ucsc.edu/
Project-URL: Repository, https://github.com/braingeneers/SpikeLab
Project-URL: Issues, https://github.com/braingeneers/SpikeLab/issues
Keywords: neuroscience,electrophysiology,spike-train,multi-electrode-array,MEA,spike-sorting,neural-data,kilosort,MCP
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering
Classifier: Topic :: Scientific/Engineering :: Bio-Informatics
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.20
Requires-Dist: scipy>=1.5
Requires-Dist: matplotlib>=3.5
Requires-Dist: h5py>=3.0.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: black==26.3.1; extra == "dev"
Provides-Extra: docs
Requires-Dist: sphinx>=7.0; extra == "docs"
Requires-Dist: sphinx-rtd-theme>=2.0; extra == "docs"
Requires-Dist: sphinx-autodoc-typehints>=1.25; extra == "docs"
Provides-Extra: io
Requires-Dist: pandas>=1.3; extra == "io"
Provides-Extra: mcp
Requires-Dist: mcp>=0.9.0; extra == "mcp"
Provides-Extra: sse
Requires-Dist: uvicorn>=0.20; extra == "sse"
Requires-Dist: starlette>=0.27; extra == "sse"
Provides-Extra: s3
Requires-Dist: boto3>=1.28.0; extra == "s3"
Provides-Extra: gplvm
Requires-Dist: jax>=0.4.26; extra == "gplvm"
Requires-Dist: jaxlib>=0.4.26; extra == "gplvm"
Requires-Dist: jaxopt>=0.8.2; extra == "gplvm"
Requires-Dist: optax>=0.2.2; extra == "gplvm"
Provides-Extra: ml
Requires-Dist: scikit-learn>=1.0; extra == "ml"
Requires-Dist: umap-learn>=0.5.0; extra == "ml"
Requires-Dist: networkx>=2.6; extra == "ml"
Requires-Dist: python-louvain>=0.16; extra == "ml"
Provides-Extra: neo
Requires-Dist: neo>=0.12.0; extra == "neo"
Requires-Dist: quantities>=0.14.0; extra == "neo"
Requires-Dist: pynwb>=2.0.0; extra == "neo"
Provides-Extra: numba
Requires-Dist: numba>=0.56; extra == "numba"
Provides-Extra: spike-sorting
Requires-Dist: spikeinterface>=0.104.0; extra == "spike-sorting"
Requires-Dist: natsort>=8.0; extra == "spike-sorting"
Requires-Dist: six>=1.16; extra == "spike-sorting"
Requires-Dist: pandas>=1.3; extra == "spike-sorting"
Provides-Extra: batch-jobs
Requires-Dist: pydantic>=2.7.0; extra == "batch-jobs"
Requires-Dist: PyYAML>=6.0.1; extra == "batch-jobs"
Requires-Dist: Jinja2>=3.1.0; extra == "batch-jobs"
Requires-Dist: kubernetes>=30.1.0; extra == "batch-jobs"
Provides-Extra: kilosort4
Requires-Dist: kilosort>=4.0; extra == "kilosort4"
Provides-Extra: all
Requires-Dist: pandas>=1.3; extra == "all"
Requires-Dist: mcp>=0.9.0; extra == "all"
Requires-Dist: boto3>=1.28.0; extra == "all"
Requires-Dist: uvicorn>=0.20; extra == "all"
Requires-Dist: starlette>=0.27; extra == "all"
Requires-Dist: jax>=0.4.26; extra == "all"
Requires-Dist: jaxlib>=0.4.26; extra == "all"
Requires-Dist: jaxopt>=0.8.2; extra == "all"
Requires-Dist: optax>=0.2.2; extra == "all"
Requires-Dist: scikit-learn>=1.0; extra == "all"
Requires-Dist: umap-learn>=0.5.0; extra == "all"
Requires-Dist: networkx>=2.6; extra == "all"
Requires-Dist: python-louvain>=0.16; extra == "all"
Requires-Dist: neo>=0.12.0; extra == "all"
Requires-Dist: quantities>=0.14.0; extra == "all"
Requires-Dist: pynwb>=2.0.0; extra == "all"
Requires-Dist: numba>=0.56; extra == "all"
Requires-Dist: spikeinterface>=0.104.0; extra == "all"
Requires-Dist: natsort>=8.0; extra == "all"
Requires-Dist: six>=1.16; extra == "all"
Requires-Dist: pydantic>=2.7.0; extra == "all"
Requires-Dist: PyYAML>=6.0.1; extra == "all"
Requires-Dist: Jinja2>=3.1.0; extra == "all"
Requires-Dist: kubernetes>=30.1.0; extra == "all"
Dynamic: license-file

# SpikeLab

[![Tests](https://github.com/braingeneers/SpikeLab/actions/workflows/tests.yml/badge.svg?branch=main)](https://github.com/braingeneers/SpikeLab/actions/workflows/tests.yml?query=branch%3Amain) [![Black Formatting](https://github.com/braingeneers/SpikeLab/actions/workflows/black.yml/badge.svg)](https://github.com/braingeneers/SpikeLab/actions/workflows/black.yml)

SpikeLab is a Python library for loading, analyzing, visualizing, and exporting neuronal spike train data from multi-electrode array (MEA) electrophysiology experiments.

📖 **Documentation:** [spikelab.braingeneers.gi.ucsc.edu](https://spikelab.braingeneers.gi.ucsc.edu/)

## What SpikeLab can do

- **Load data** from common neuroscience formats (HDF5, NWB, KiloSort/Phy, SpikeInterface)
- **Represent spike trains** as `SpikeData` objects with per-unit spike times in milliseconds
- **Compute firing rates** as `RateData` objects (instantaneous firing rates binned over time)
- **Slice around events** to create `SpikeSliceStack` or `RateSliceStack` objects for event-aligned analysis
- **Conduct analyses** at the single-unit, pairwise, and population levels
- **Export data** to KiloSort, NWB, and other formats
- **Store and organize results** using the `AnalysisWorkspace` for multi-stage analysis projects
- **Access programmatically** via a built-in MCP server for tool-based workflows
- **Run spike sorting** on electrophysiology recordings with built-in pipelines for Kilosort2, Kilosort4, and rt-sort (`spikelab.spike_sorting`)
- **Submit batch jobs** to remote Kubernetes clusters for compute-heavy workloads via `spikelab.batch_jobs`
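
The rate computation underlying the "compute firing rates" step (per-unit spike times in milliseconds, binned over time into a `(units, time_bins)` matrix) can be sketched in plain numpy. This illustrates the arithmetic only, not the `SpikeData`/`RateData` API; the spike times below are invented:

```python
import numpy as np

# Hypothetical per-unit spike times in milliseconds (the unit SpikeLab
# uses throughout); plain numpy, not the SpikeData API.
spike_times_ms = [
    np.array([12.0, 150.0, 420.0, 980.0]),        # unit 0
    np.array([5.0, 310.0, 311.0, 700.0, 995.0]),  # unit 1
]

duration_ms = 1000.0
bin_size_ms = 100.0
edges = np.arange(0.0, duration_ms + bin_size_ms, bin_size_ms)

# Spike counts per (unit, time_bin), the shape a rate matrix uses.
counts = np.stack([np.histogram(st, bins=edges)[0] for st in spike_times_ms])

# Convert counts to firing rates in Hz (spikes per second per bin).
rates_hz = counts / (bin_size_ms / 1000.0)
print(counts.shape)    # (2, 10)
print(rates_hz[1, 3])  # two spikes in the 300-400 ms bin -> 20.0 Hz
```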

## Installation

### Prerequisites

You need **Python 3.10 or later**. If you don't have Python installed, we recommend installing it via [Miniconda](https://docs.anaconda.com/miniconda/).

### Option 1: pip install (recommended)

```bash
pip install spikelab
```

This installs SpikeLab and its core dependencies (numpy, scipy, matplotlib, h5py).

### Option 2: conda environment

If you prefer a conda environment with all dependencies pre-configured:

```bash
git clone https://github.com/braingeneers/SpikeLab.git
cd SpikeLab
conda env create -f environment.yml
conda activate spikelab
pip install -e .
```

### Option 3: install from source

For development, clone the repository and install in editable mode:

```bash
git clone https://github.com/braingeneers/SpikeLab.git
cd SpikeLab
pip install -e .
```

### Verify the installation

Open a Python prompt and run:

```python
from spikelab import SpikeData
print("SpikeLab is installed correctly!")
```

If you see the success message, you're ready to go.

### Optional dependencies

Some features require additional packages that are not installed by default. Install them by appending the extra in brackets:

```bash
pip install "spikelab[s3]"
pip install "spikelab[s3,ml,mcp]"   # multiple extras
pip install "spikelab[all]"         # everything except kilosort4
```

| Extra | Install command | What it enables |
|---|---|---|
| `mcp` | `pip install "spikelab[mcp]"` | Built-in MCP server for tool-based workflows |
| `sse` | `pip install "spikelab[sse]"` | SSE transport for the MCP server (uvicorn + starlette) |
| `s3` | `pip install "spikelab[s3]"` | Upload/download data from Amazon S3 (or any S3-compatible store) |
| `io` | `pip install "spikelab[io]"` | Extra I/O helpers (pandas) |
| `ml` | `pip install "spikelab[ml]"` | scikit-learn, UMAP, networkx, python-louvain |
| `neo` | `pip install "spikelab[neo]"` | NWB / neo / quantities for reading NWB files |
| `gplvm` | `pip install "spikelab[gplvm]"` | Gaussian Process Latent Variable Model fitting |
| `numba` | `pip install "spikelab[numba]"` | Numba-accelerated routines |
| `spike-sorting` | `pip install "spikelab[spike-sorting]"` (+ MATLAB for Kilosort2) | Kilosort2 / rt-sort pipelines via `spikelab.spike_sorting` |
| `kilosort4` | `pip install "spikelab[kilosort4]"` (+ PyTorch with CUDA, [installed separately](https://pytorch.org/get-started/locally/)) | Kilosort4 pipeline |
| `batch-jobs` | `pip install "spikelab[batch-jobs]"` | Submit jobs to remote Kubernetes clusters (`spikelab-batch-jobs` CLI) |
| `docs` | `pip install "spikelab[docs]"` | Sphinx + theme + autodoc-typehints for building the docs |
| `dev` | `pip install "spikelab[dev]"` | pytest, black, and other dev utilities |
| `all` | `pip install "spikelab[all]"` | All of the above except `kilosort4` |

When installing from a local source checkout, replace `spikelab` with `-e .` (e.g. `pip install -e ".[s3]"`).
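
If you are unsure which extras are already satisfied in your environment, a quick probe with the standard library can tell you which packages are importable. The extra-to-module mapping below is drawn from the table above (note that scikit-learn's import name is `sklearn`):

```python
import importlib.util

# Map each extra to one representative module it installs.
optional = {"s3": "boto3", "ml": "sklearn", "mcp": "mcp", "io": "pandas"}

for extra, module in optional.items():
    found = importlib.util.find_spec(module) is not None
    print(f"[{extra}] {module}: {'installed' if found else 'missing'}")
```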

## Quick start

```python
from spikelab import SpikeData
from spikelab.data_loaders import load_spikedata_from_nwb

# Load spike data from an NWB file
sd = load_spikedata_from_nwb("recording.nwb")

# Basic properties
print(f"Units: {sd.N}")
print(f"Duration: {sd.length} ms")

# Compute instantaneous firing rates (100 ms bins)
rates = sd.rates(bin_size=100.0)

# Get a binary spike raster (1 ms bins)
raster = sd.raster(bin_size_ms=1.0)

# Compute pairwise spike time tiling coefficients
sttc_matrix = sd.spike_time_tilings(delt=20.0)

# Export to KiloSort format
sd.to_kilosort("ks_output/", fs_Hz=20000.0)
```

## Key concepts

- **All spike times are in milliseconds** throughout the library.
- **`SpikeData`** holds per-unit spike times and is the starting point for all analyses.
- **`RateData`** holds binned instantaneous firing rates with shape `(units, time_bins)`.
- **`SpikeSliceStack` / `RateSliceStack`** hold event-aligned slices for comparative analysis.
- **`PairwiseCompMatrix`** holds an N x N comparison matrix (e.g., STTC between unit pairs).
- **`AnalysisWorkspace`** stores intermediate results across multi-stage analysis pipelines.
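
To make the slice-stack idea concrete, here is the event-aligned windowing that `SpikeSliceStack` represents, written in plain numpy rather than the library API. The spike and event times are invented for illustration:

```python
import numpy as np

# One unit's spike times (ms) and two hypothetical stimulus events.
spikes_ms = np.array([90.0, 105.0, 480.0, 510.0, 515.0, 900.0])
events_ms = np.array([100.0, 500.0])
window_ms = (-50.0, 50.0)  # window around each event

# For each event, keep spikes in the window and re-reference
# them to the event time, giving one aligned slice per event.
slices = []
for t in events_ms:
    lo, hi = t + window_ms[0], t + window_ms[1]
    aligned = spikes_ms[(spikes_ms >= lo) & (spikes_ms < hi)] - t
    slices.append(aligned)

for t, s in zip(events_ms, slices):
    print(t, s)
```

Stacking such slices across events (and units) is what enables the comparative, event-aligned analyses described above.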

## AI-assisted analysis

SpikeLab includes a set of built-in skills that guide your CLI agent through data analysis, spike sorting, library development, and education — all through natural language conversation.

### How it works

The skills ship inside the installed package at `spikelab/agent/skills/`. A lightweight **`spikelab` router skill** (installed separately into your agent's skills directory) handles environment detection (conda vs. system Python, install if missing) and then delegates to the in-repo skill that best matches the user's request:

| In-repo skill | Use when the user wants to… |
|---|---|
| **`spikelab-analysis-implementer`** | Load data, write/run analysis scripts, generate publication-quality figures, manage results, and keep repo maps current |
| **`spikelab-spikesorter`** | Sort raw recordings (Kilosort2, Kilosort4, RT-Sort), curate units, run stim-aligned sorting, and inspect sorting outputs |
| **`spikelab-developer`** | Promote ad-hoc analysis code into the library — identify reusable methods, integrate novel computations, write tests, expose via MCP, and submit a PR |
| **`spikelab-educator`** | Explain what an analysis does, how a method works, or what a result means — read-only, no code execution |
| **`spikelab-map-updater`** | Regenerate the repo map files after library changes |

CLI agents that load skills from installed packages pick up the in-repo skills automatically; alternatively, copy or symlink them into the agent's skills directory. As an alternative to the skills, MCP tools are available for every method in the library.

## Directory structure

```
SpikeLab/
├── src/
│   └── spikelab/           # Installable Python package
│       ├── spikedata/          # Core data structures and analysis
│       │   ├── spikedata.py        # SpikeData class
│       │   ├── ratedata.py         # RateData class
│       │   ├── spikeslicestack.py  # SpikeSliceStack class
│       │   ├── rateslicestack.py   # RateSliceStack class
│       │   ├── pairwise.py         # PairwiseCompMatrix and PairwiseCompMatrixStack
│       │   ├── utils.py            # Shared utility functions
│       │   └── plot_utils.py       # Visualization helpers
│       ├── data_loaders/       # File I/O
│       │   ├── data_loaders.py     # Load from HDF5, NWB, KiloSort, SpikeInterface
│       │   ├── data_exporters.py   # Export to KiloSort, NWB, and other formats
│       │   └── s3_utils.py         # Amazon S3 upload/download utilities
│       ├── spike_sorting/      # Spike-sorting pipelines
│       │   ├── pipeline.py         # Top-level sorting pipeline + config
│       │   ├── ks2_runner.py       # Kilosort2 runner (requires MATLAB)
│       │   ├── ks4_runner.py       # Kilosort4 runner (PyTorch / CUDA)
│       │   ├── rt_sort/            # rt-sort runner
│       │   └── stim_sorting/       # Stimulation-aware sorting helpers
│       ├── workspace/          # Analysis workspace for storing intermediate results
│       │   ├── workspace.py        # AnalysisWorkspace class
│       │   └── hdf5_io.py          # HDF5 serialization for workspace objects
│       ├── mcp_server/         # MCP protocol server for programmatic access
│       │   ├── server.py           # MCP server implementation
│       │   └── tools/              # MCP tool definitions
│       ├── batch_jobs/         # Remote Kubernetes job submission
│       │   ├── cli.py              # spikelab-batch-jobs CLI
│       │   ├── session.py          # RunSession entry point
│       │   ├── policy.py           # Pre-submission policy checks
│       │   ├── profiles/           # Built-in cluster profiles (YAML)
│       │   └── templates/          # Jinja2 manifest templates
│       └── agent/              # Bundled agent skills (analysis-implementer, …)
│           └── skills/
├── tests/              # Test suite (pytest)
├── docs/               # Sphinx documentation source
├── examples/           # Example scripts and notebooks
├── environment.yml     # Conda environment specification
└── pyproject.toml      # Package configuration
```

## Running tests

```bash
git clone https://github.com/braingeneers/SpikeLab.git
cd SpikeLab
pip install -e ".[dev]"
pytest tests/ -v
```

## Contributing

Contributions are welcome! Please open an issue or pull request on the [GitHub repository](https://github.com/braingeneers/SpikeLab).

All code must be formatted with [Black](https://black.readthedocs.io/). You can check formatting with:

```bash
black --check .
```

## License

SpikeLab is released under the [MIT License](LICENSE).
