Synaptipy API Reference

This document provides reference information for developers using Synaptipy as a library. Autodoc-generated class and function signatures are included below each section alongside usage examples.

Core Components

Data Model

Core Domain Data Models for Synaptipy.

Defines the central classes representing electrophysiology concepts like Recording sessions and individual data Channels.

class Synaptipy.core.data_model.UndoStack(max_depth=20)[source]

Bases: object

Lightweight state-history stack for non-destructive editing (Command pattern).

Stores deep-copy snapshots of channel data before destructive operations so that a single Channel.undo() call instantly restores the previous state. Memory depth is capped at max_depth entries (oldest entries are evicted).

Usage:

channel.push_undo("apply lowpass 300 Hz")
channel.data_trials = filtered_trials
# ...
channel.undo()  # restores data_trials to state before the filter

Initialise the undo stack.

Parameters:

max_depth – Maximum number of undo levels retained (default 20).

push(label, state)[source]

Save a named state snapshot.

Parameters:
  • label – Human-readable description of the pending change (e.g. "notch 50 Hz").

  • state – Arbitrary serialisable dict representing the channel state to restore.

pop()[source]

Remove and return the most recently saved state.

Returns:

(label, state) tuple, or None if the stack is empty.

can_undo()[source]

Return True if at least one undo level is available.

property depth

Number of undo levels currently stored.

clear()[source]

Discard all saved states.

class Synaptipy.core.data_model.Channel(id, name, units, sampling_rate, data_trials, loader=None)[source]

Bases: object

Represents a single channel of recorded data, potentially across multiple trials or segments.

Initializes a Channel object.

Parameters:
  • id – A unique identifier for the channel (e.g., ‘0’, ‘1’, ‘Vm’).

  • name – A descriptive name for the channel (e.g., ‘Voltage’, ‘IN_0’).

  • units – The physical units of the data (e.g., ‘mV’, ‘pA’, ‘V’).

  • sampling_rate – The sampling frequency in Hz.

  • data_trials – A list where each element is a 1D NumPy array representing the data for one trial/segment.

  • loader – Optional callable/object with load_trial(index) method for lazy loading.
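
A minimal construction sketch using only the documented parameters (values illustrative):

import numpy as np
from Synaptipy.core.data_model import Channel

trials = [np.zeros(1000), np.zeros(1000)]  # two trials of 1000 samples each
channel = Channel(id="0", name="Voltage", units="mV",
                  sampling_rate=10000.0, data_trials=trials)
assert channel.num_trials == 2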

property num_trials

Returns the number of trials/segments available for this channel.

property num_samples

Returns the number of samples in the first trial.

WARNING: This property only checks the first trial. If trials have variable lengths, this value may be misleading. Use get_consistent_samples() for strict validation.

Returns 0 if no trials are present.

get_consistent_samples()[source]

Returns the number of samples per trial, ensuring all trials have the same length. Raises ValueError if trials have different lengths. Returns 0 if no trials.

get_data(trial_index)[source]

Returns the raw data for a specific trial. For lazy loading, this method will load the data from disk if not already loaded.

get_time_vector(trial_index)[source]
get_relative_time_vector(trial_index)[source]
get_averaged_data(trial_indices=None)[source]
get_averaged_time_vector()[source]
get_relative_averaged_time_vector()[source]
get_current_data(trial_index)[source]
get_averaged_current_data()[source]
get_primary_data_label()[source]

Determines a suitable label (‘Voltage’, ‘Current’, ‘Signal’) based on units.

get_data_bounds()[source]

Returns the min and max values across all trials for this channel.

get_finite_data_bounds()[source]

Returns the min and max values across all trials, ensuring they are finite. Returns None if no finite data is found.

push_undo(label='')[source]

Save the current data_trials state so that undo() can restore it.

Call this before any destructive operation (filter, event deletion, …).

Parameters:

label – Short human-readable description of the upcoming change (e.g. "lowpass 300 Hz"). Stored for UI display only.

undo()[source]

Restore data_trials to the last saved state.

Returns:

True if a state was restored, False if the stack was empty.

property can_undo

True when at least one undo level is available.

class Synaptipy.core.data_model.Recording(source_file)[source]

Bases: object

Represents data and metadata loaded from a single recording file. Contains multiple Channel objects.

Initializes a Recording object.

Parameters:

source_file – The Path object pointing to the original data file.

close()[source]

Release any underlying file handles held by the source handle.

Must be called when a recording is removed from the workspace to prevent Neo IO readers from keeping the source file locked.

property num_channels

Returns the number of channels in this recording.

property channel_names

Returns a list of the names of all channels.

property max_trials

Returns the maximum number of trials found across all channels in this recording. Returns 0 if there are no channels or no trials.

class Synaptipy.core.data_model.Experiment[source]

Bases: object

Optional container representing a collection of Recordings, potentially from a single experimental session or related set. (Currently basic).

Usage Example

from pathlib import Path
from Synaptipy.core.data_model import Recording

recording = Recording(source_file=Path("/path/to/file.abf"))
num_channels = recording.num_channels    # number of Channel objects
names = recording.channel_names          # list of channel names
max_trials = recording.max_trials        # max trial count across channels
recording.close()                        # release file handles when finished

File Readers

Adapter for reading various electrophysiology file formats using the neo library and translating them into the application’s core domain model. IO class selection uses a predefined dictionary mapping extensions to IO names.

The read_recording method implements a robust “Header-First” approach:

  1. Reads the file header first to discover ALL channels with their metadata.

  2. Creates a definitive channel map before processing any signal data.

  3. Aggregates data from segments to the correct channels using stable IDs.

  4. Ensures all channels (including custom-labeled ones) are correctly identified.

This approach eliminates assumptions about data structure and ensures that what’s in the file header is what gets loaded, making the software truly versatile for WCP, ABF, and other supported file formats.

Also provides a method to generate a Qt file dialog filter based on its supported IOs.

class Synaptipy.infrastructure.file_readers.neo_adapter.NeoAdapter[source]

Bases: object

Reads ephys files using neo, translating data to the Core Domain Model. Uses a fixed dictionary (IODict) for IO selection and implements a robust “Header-First” approach for channel identification.

The Header-First approach ensures:

  • All channels are discovered from the file header before data processing.

  • Custom channel labels are preserved and correctly mapped.

  • No assumptions are made about data structure.

  • Versatile support for WCP, ABF, and other formats.

get_supported_file_filter()[source]

Generates a file filter string for QFileDialog based on the IODict.

get_supported_extensions()[source]

Returns a list of all supported file extensions (e.g. [‘abf’, ‘dat’, …]). Used for filtering file views.

read_recording(filepath, lazy=False, channel_whitelist=None, force_kHz_to_Hz=False)[source]

Reads any neo-supported electrophysiology file and translates it into a robust Recording object. This is the definitive, file-format-agnostic implementation.

Usage Example

from Synaptipy.infrastructure.file_readers.neo_adapter import NeoAdapter

adapter = NeoAdapter()
recording = adapter.read_recording("/path/to/data.abf")
extensions = adapter.get_supported_extensions()    # e.g. ['abf', 'dat', ...]
file_filter = adapter.get_supported_file_filter()  # filter string for QFileDialog

Analysis Modules

Synaptipy organises its 15 built-in analysis routines into five core modules. Each module corresponds to a tab in the GUI Analyser and is also available as a composable unit in the batch processing pipeline.

Module 1 - Passive Membrane Properties

Baseline (RMP), Input Resistance, Membrane Time Constant (Tau), Sag Ratio, I-V Curve, and Capacitance.

Core Protocol Module 1: Passive Membrane Properties.

Consolidates: Resting Membrane Potential (RMP), Input Resistance (Rin), Membrane Time Constant (Tau), Sag Ratio, and Capacitance analysis.

All registry wrapper functions return the standard namespaced schema:

{
    "module_used": "passive_properties",
    "metrics": { ... flat result keys ... }
}
Synaptipy.core.analysis.passive_properties.apply_ljp_correction(voltage, ljp_mv)[source]

Return a copy of voltage with the Liquid Junction Potential subtracted.

V_true = V_recorded - LJP

When ljp_mv is 0.0 the original array is returned unchanged to avoid unnecessary allocations in the common case.

Parameters:
  • voltage – 1-D voltage array in mV.

  • ljp_mv – Liquid Junction Potential in mV (positive value shifts voltage downward; negative shifts upward).

Returns:

Corrected voltage array (same dtype as input).
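
A quick sketch of the correction (values illustrative):

import numpy as np
from Synaptipy.core.analysis.passive_properties import apply_ljp_correction

voltage = np.array([-65.0, -64.8, -65.2])            # recorded Vm in mV
corrected = apply_ljp_correction(voltage, ljp_mv=12.0)
# corrected = voltage - 12.0, per V_true = V_recorded - LJP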

Synaptipy.core.analysis.passive_properties.calculate_rmp(data, time, baseline_window)[source]

Calculate the Resting Membrane Potential (RMP) from a defined baseline window.

Parameters:
  • data – 1D NumPy array of voltage data.

  • time – 1D NumPy array of corresponding time points (seconds).

  • baseline_window – Tuple (start_time, end_time) defining the baseline period.

Returns:

RmpResult object.

Synaptipy.core.analysis.passive_properties.calculate_baseline_stats(time, voltage, start_time, end_time)[source]

Return (mean, std_dev) for a baseline window or None.

Synaptipy.core.analysis.passive_properties.find_stable_baseline(data, sample_rate, window_duration_s=0.5, step_duration_s=0.1)[source]

Find the most stable (lowest variance) baseline segment by sliding window.

Synaptipy.core.analysis.passive_properties.calculate_rin(voltage_trace, time_vector, current_amplitude, baseline_window, response_window, parameters=None, rs_artifact_blanking_ms=0.5)[source]

Calculate Input Resistance (Rin = delta_V / delta_I).

Also computes Peak Rin (from maximum voltage deflection, sensitive to Ih sag) and Steady-State Rin (from the last 20% of the response window, reflecting the true membrane resistance after sag recovery).

Parameters:
  • voltage_trace – 1D voltage array (mV).

  • time_vector – 1D time array (s).

  • current_amplitude – Current step amplitude (pA).

  • baseline_window – (start, end) seconds for baseline.

  • response_window – (start, end) seconds for response.

  • parameters – Optional parameter dict stored in result.

  • rs_artifact_blanking_ms – Duration (ms) to skip at the start of the response window, preventing series-resistance jump contamination (default 0.5 ms).

Returns:

RinResult object with value (mean Rin), rin_peak_mohm, and rin_steady_state_mohm populated.
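
A usage sketch for a -50 pA hyperpolarising step, assuming voltage and time are 1-D NumPy arrays from a current-clamp sweep (window values illustrative):

from Synaptipy.core.analysis.passive_properties import calculate_rin

result = calculate_rin(
    voltage_trace=voltage, time_vector=time,
    current_amplitude=-50.0,        # pA
    baseline_window=(0.0, 0.1),     # s, pre-step
    response_window=(0.2, 0.7),     # s, during step
)
print(result.value, result.rin_peak_mohm, result.rin_steady_state_mohm)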

Synaptipy.core.analysis.passive_properties.calculate_vc_transient_parameters(current_trace, time_vector, step_onset_time, voltage_step_mv, baseline_window_s=0.005, transient_window_ms=20.0)[source]

Fit the capacitive transient from a VC step to extract Rs and Cm.

In voltage-clamp, a command voltage step elicits a capacitive transient whose peak height and time-integral allow decomposition of series resistance (Rs) and whole-cell capacitance (Cm):

Rs = delta_V / I_peak
Cm = Q_transient / delta_V    where Q is charge under the transient
tau_c = Rs * Cm  (capacitive charging time constant)

A mono-exponential is also fit to the transient decay to yield tau_c and a refined Cm_fit estimate.

Parameters:
  • current_trace (np.ndarray) – 1-D current array (pA).

  • time_vector (np.ndarray) – 1-D time array aligned with current_trace (seconds).

  • step_onset_time (float) – Time of the voltage-clamp step onset (seconds).

  • voltage_step_mv (float) – Amplitude of the voltage step (mV). Sign is preserved.

  • baseline_window_s (float) – Duration of the pre-step baseline used to compute holding current (s).

  • transient_window_ms (float) – Duration of the post-step window searched for the transient peak (ms).

Returns:

Keys: rs_mohm, cm_pf, tau_c_ms, cm_fit_pf, transient_peak_pa, transient_charge_pa_s. All values are float(np.nan) when the fit fails.

Return type:

dict

Synaptipy.core.analysis.passive_properties.calculate_cc_series_resistance_fast(voltage_trace, time_vector, step_onset_time, current_step_pa, artifact_window_ms=0.1, tau_ms=None, rin_mohm=None)[source]

Estimate CC series resistance from the fast voltage artifact and derive Cm.

In current clamp, the fast resistive drop at step onset reflects the series (pipette + access) resistance before the membrane charges:

Rs = delta_V_fast / I_step   (measured over the first artifact_window_ms after onset)

When tau_ms and rin_mohm are supplied, CC membrane capacitance is derived as:

Cm = tau_CC / R_in
Parameters:
  • voltage_trace (np.ndarray) – 1-D voltage array (mV).

  • time_vector (np.ndarray) – 1-D time array (s).

  • step_onset_time (float) – Time of the current step onset (seconds).

  • current_step_pa (float) – Amplitude of the injected current step (pA). Must be non-zero.

  • artifact_window_ms (float) – Duration of the initial fast artifact window used to measure the instantaneous voltage drop (ms, default 0.1).

  • tau_ms (float, optional) – Membrane time constant from exponential fit (ms). Required to derive cm_derived_pf.

  • rin_mohm (float, optional) – Input resistance (MOhm). Required to derive cm_derived_pf.

Returns:

Keys: rs_cc_mohm, cm_derived_pf. Values are float(np.nan) when the calculation fails.

Return type:

dict

Synaptipy.core.analysis.passive_properties.calculate_conductance(current_trace, time_vector, voltage_step, baseline_window, response_window, parameters=None)[source]

Calculate Conductance (G = delta_I / delta_V) from a voltage-clamp current trace.

Synaptipy.core.analysis.passive_properties.calculate_iv_curve(sweeps, time_vectors, current_steps, baseline_window, response_window)[source]

Calculate the I-V relationship across multiple sweeps.

Synaptipy.core.analysis.passive_properties.calculate_tau(voltage_trace, time_vector, stim_start_time, fit_duration, model='mono', tau_bounds=None, artifact_blanking_ms=0.5)[source]

Calculate Membrane Time Constant (Tau) by fitting an exponential.

Parameters:
  • voltage_trace – 1D voltage array (mV).

  • time_vector – 1D time array (s).

  • stim_start_time – Stimulus onset time (s).

  • fit_duration – Duration of fit window (s).

  • model – ‘mono’ or ‘bi’.

  • tau_bounds – (min_tau, max_tau) in seconds. Defaults to (1e-4, 1.0).

  • artifact_blanking_ms – Time to skip after stimulus onset (ms).

Returns:

For model='mono': dict with tau_ms, fit_time, fit_values. For model='bi': dict with tau_fast_ms, tau_slow_ms, amplitude_fast, amplitude_slow, V_ss, fit_time, fit_values. None if fitting fails fatally.

Return type:

dict or None
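
A mono-exponential fit sketch (times illustrative; voltage and time are 1-D arrays):

from Synaptipy.core.analysis.passive_properties import calculate_tau

fit = calculate_tau(voltage_trace=voltage, time_vector=time,
                    stim_start_time=0.2, fit_duration=0.1, model='mono')
if fit is not None:
    print(fit['tau_ms'])       # membrane time constant in ms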

Synaptipy.core.analysis.passive_properties.calculate_sag_ratio(voltage_trace, time_vector, baseline_window, response_peak_window, response_steady_state_window, peak_smoothing_ms=5.0, rebound_window_ms=100.0)[source]

Calculate Sag Potential Ratio from a hyperpolarising current step.

Returns dict with keys sag_ratio, sag_percentage, v_peak, v_ss, v_baseline, rebound_depolarization.

Synaptipy.core.analysis.passive_properties.calculate_capacitance_cc(tau_ms, rin_mohm, rs_mohm=None)[source]

Calculate Cell Capacitance (Cm) from Current-Clamp data.

When rs_mohm is provided, the series resistance is subtracted from the input resistance before computing capacitance, giving the corrected formula:

Cm = tau / (Rin - Rs)

Without rs_mohm (or when rs_mohm is None), the simpler approximation Cm = tau / Rin is used and a warning is logged to remind the caller that the result may be over-estimated.

Parameters:
  • tau_ms – Membrane time constant (ms).

  • rin_mohm – Input resistance (MOhm).

  • rs_mohm – Series (access) resistance (MOhm), optional.

Returns:

Membrane capacitance in pF, or None when inputs are invalid.
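
A worked example of the corrected formula (numbers illustrative): with tau = 20 ms, Rin = 200 MOhm and Rs = 20 MOhm, Cm = 20 ms / (200 - 20) MOhm ≈ 111 pF, whereas the uncorrected approximation gives 100 pF.

from Synaptipy.core.analysis.passive_properties import calculate_capacitance_cc

cm_pf = calculate_capacitance_cc(tau_ms=20.0, rin_mohm=200.0, rs_mohm=20.0)
# ~111.1 pF; calculate_capacitance_cc(20.0, 200.0) returns ~100.0 pF instead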

Synaptipy.core.analysis.passive_properties.calculate_capacitance_vc(current_trace, time_vector, baseline_window, transient_window, voltage_step_amplitude_mv)[source]

Calculate Cm and Rs from a voltage-clamp capacitive transient.

Method:

  1. Rs = delta_V / I_peak (Ohm’s law at the instant of the step).

  2. Fit a mono-exponential to the transient decay -> tau_transient.

  3. Cm = tau_transient / Rs.

Falls back to the charge-integral (AUC) method for Cm when the exponential fit fails.

Parameters:
  • current_trace – 1-D current array (pA).

  • time_vector – Corresponding time array (s).

  • baseline_window – (start, end) in seconds for pre-step baseline.

  • transient_window – (start, end) in seconds covering the capacitive transient.

  • voltage_step_amplitude_mv – Voltage command step (mV). Sign is ignored.

Returns:

Dict with keys capacitance_pf (pF) and series_resistance_mohm (MOhm), or None on failure.

Synaptipy.core.analysis.passive_properties.run_rmp_analysis_wrapper(data_list, time_list, sampling_rate, **kwargs)[source]

Wrapper for RMP analysis. Accepts single or multi-trial data. Returns namespaced schema.

Synaptipy.core.analysis.passive_properties.run_sag_ratio_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for Sag Ratio analysis. Returns namespaced schema.

Synaptipy.core.analysis.passive_properties.run_rin_analysis_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for Input Resistance analysis. Returns namespaced schema.

Synaptipy.core.analysis.passive_properties.run_tau_analysis_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for Tau analysis. Returns namespaced schema.

Synaptipy.core.analysis.passive_properties.run_iv_curve_wrapper(data_list, time_list, sampling_rate, **kwargs)[source]

Wrapper for multi-trial I-V Curve analysis. Returns namespaced schema.

Synaptipy.core.analysis.passive_properties.run_capacitance_analysis_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for Capacitance analysis. Returns namespaced schema.

Synaptipy.core.analysis.passive_properties.passive_properties_module(**kwargs)[source]

Module-level aggregator tab for all passive membrane property analyses.

Module 2 - Single Spike Analysis

Spike Detection and Phase Plane (dV/dt vs V) analysis.

Core Protocol Module 2: Single Spike Analysis.

Consolidates: Spike Detection, AP Characterisation (threshold, amplitude, half-width, rise/decay times, AHP) and Phase Plane (dV/dt vs V) analysis.

All registry wrapper functions return:

{
    "module_used": "single_spike",
    "metrics": { ... flat result keys ... }
}
Synaptipy.core.analysis.single_spike.detect_spikes_threshold(data, time, threshold, refractory_samples, peak_search_window_samples=None, parameters=None, dvdt_threshold=20.0)[source]

Detect spikes based on dV/dt threshold crossing with refractory period.

Parameters:
  • data – 1D voltage array (mV).

  • time – 1D time array (s).

  • threshold – Minimum voltage that the peak must exceed (mV).

  • refractory_samples – Minimum samples between spikes.

  • peak_search_window_samples – Samples to search for peak after crossing.

  • parameters – Optional parameter dict recorded in result.

  • dvdt_threshold – dV/dt threshold for onset detection (V/s, default 20.0).

Returns:

SpikeTrainResult.
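
A detection sketch at 20 kHz; the fields on SpikeTrainResult are not documented here, so inspect the returned object in practice:

from Synaptipy.core.analysis.single_spike import detect_spikes_threshold

fs = 20000.0
result = detect_spikes_threshold(
    data=voltage, time=time,
    threshold=-20.0,                        # mV, peak criterion
    refractory_samples=int(0.002 * fs),     # 2 ms refractory period
)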

Synaptipy.core.analysis.single_spike.calculate_spike_features(data, time, spike_indices, dvdt_threshold=20.0, ahp_window_sec=0.05, onset_lookback=0.01, fahp_window_ms=(1.0, 5.0), mahp_window_ms=(10.0, 50.0))[source]

Calculate detailed features for each detected spike (vectorised NumPy).

Returns list of dicts per spike: ap_threshold, amplitude, half_width, rise_time_10_90, decay_time_90_10, fahp_depth, mahp_depth, ahp_duration_half, adp_amplitude, max_dvdt, min_dvdt.

AP threshold is detected via the peak of d2V/dt2 in the pre-spike lookback window (maximum curvature method). Falls back to the first dV/dt crossing above dvdt_threshold when d2V/dt2 gives a boundary result.

Parameters:
  • data – 1-D voltage array (mV).

  • time – Corresponding time array (s).

  • spike_indices – Array of sample indices for each spike peak.

  • dvdt_threshold – Fallback dV/dt threshold for AP onset (V/s).

  • ahp_window_sec – Duration of AHP/ADP search window (s).

  • onset_lookback – Lookback window before each spike peak (s).

  • fahp_window_ms – (start, end) of fast-AHP window after peak (ms).

  • mahp_window_ms – (start, end) of medium-AHP window after peak (ms).

Synaptipy.core.analysis.single_spike.calculate_isi(spike_times)[source]

Return inter-spike intervals from spike_times array.

Synaptipy.core.analysis.single_spike.analyze_multi_sweep_spikes(data_trials, time_vector, threshold, refractory_samples, dvdt_threshold=20.0)[source]

Detect spikes across multiple sweeps.

Synaptipy.core.analysis.single_spike.calculate_dvdt(voltage, sampling_rate, sigma_ms=0.1)[source]

Calculate dV/dt (V/s) with optional Savitzky-Golay smoothing.

Computes the raw derivative first, then applies a Savitzky-Golay filter (polynomial order 3) directly to the derivative array. This preserves the true max dV/dt better than pre-smoothing the voltage with a Gaussian, which attenuates the sharp upstroke of action potentials.

Parameters:
  • voltage – 1D voltage array (mV).

  • sampling_rate – Sampling rate (Hz).

  • sigma_ms – Smoothing window (ms). The SG window length is derived as max(5, int(sigma_ms / 1000 * sampling_rate)), rounded up to the next odd integer. Set to 0 for no smoothing.

Returns:

1D array of dV/dt in V/s.

Synaptipy.core.analysis.single_spike.get_phase_plane_trajectory(voltage, sampling_rate, sigma_ms=0.1)[source]

Return (voltage, dvdt) phase-plane trajectory.
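
A phase-plane sketch (the plotting step is an assumption, not part of the API):

from Synaptipy.core.analysis.single_spike import get_phase_plane_trajectory

v, dvdt = get_phase_plane_trajectory(voltage, sampling_rate=20000.0, sigma_ms=0.1)
# plot dvdt (V/s) against v (mV) to visualise AP threshold and upstroke velocity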

Synaptipy.core.analysis.single_spike.detect_threshold_kink(voltage, sampling_rate, dvdt_threshold=20.0, kink_slope=10.0, search_window_ms=5.0, peak_indices=None)[source]

Detect AP threshold using the dV/dt kink method.

Returns array of threshold indices.

Synaptipy.core.analysis.single_spike.run_spike_detection_wrapper(data, time, sampling_rate, threshold=-20.0, refractory_period=0.002, peak_search_window=0.005, dvdt_threshold=20.0, ahp_window=0.05, onset_lookback=0.01, **kwargs)[source]

Wrapper for spike detection. Returns namespaced schema.

Synaptipy.core.analysis.single_spike.phase_plane_analysis_wrapper(voltage, time, sampling_rate, sigma_ms=0.1, dvdt_threshold=20.0, **kwargs)[source]

Wrapper for Phase Plane analysis. Returns namespaced schema.

Synaptipy.core.analysis.single_spike.phase_plane_analysis(voltage, time, sampling_rate, sigma_ms=0.1, dvdt_threshold=20.0, **kwargs)

Wrapper for Phase Plane analysis. Returns namespaced schema.

Synaptipy.core.analysis.single_spike.single_spike_module(**kwargs)[source]

Module-level aggregator tab for single-spike analyses.

Module 3 - Firing Dynamics

Excitability (F-I Curve), Burst Analysis, and Spike Train Dynamics.

Core Protocol Module 3: Firing Dynamics.

Consolidates: Excitability (F-I curve), Burst Analysis, and Spike Train Dynamics into one self-contained module.

All registry wrapper functions return:

{
    "module_used": "firing_dynamics",
    "metrics": { ... flat result keys ... }
}
Synaptipy.core.analysis.firing_dynamics.calculate_fi_curve(sweeps, time_vectors, current_steps=None, threshold=-20.0, refractory_ms=2.0)[source]

Calculate F-I Curve properties from a set of sweeps.

Parameters:
  • sweeps – List of voltage traces (1D arrays).

  • time_vectors – List of corresponding time vectors.

  • current_steps – List of current amplitudes for each sweep. If None, inferred.

  • threshold – Spike detection threshold (mV).

  • refractory_ms – Refractory period (ms).

Returns:

Dictionary with rheobase_pa, fi_slope, max_freq, spike_counts, frequencies, adaptation_ratios, current_steps.
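
A sketch across a family of current steps; sweeps and time_vectors are lists of 1-D arrays (step values illustrative):

from Synaptipy.core.analysis.firing_dynamics import calculate_fi_curve

fi = calculate_fi_curve(sweeps, time_vectors,
                        current_steps=[-50.0, 0.0, 50.0, 100.0],   # pA
                        threshold=-20.0)
print(fi['rheobase_pa'], fi['fi_slope'], fi['max_freq'])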

Synaptipy.core.analysis.firing_dynamics.run_excitability_analysis_wrapper(data_list, time_list, sampling_rate, **kwargs)[source]

Wrapper for Excitability Analysis (F-I Curve).

Synaptipy.core.analysis.firing_dynamics.calculate_bursts_logic(spike_times, max_isi_start=0.01, max_isi_end=0.2, min_spikes=2, dynamic_burst=False, burst_isi_fraction=0.3, parameters=None)[source]

Detect bursts in a spike train.

Parameters:
  • spike_times – 1D array of spike times (seconds).

  • max_isi_start – Max ISI to start a burst (s). Ignored when dynamic_burst=True.

  • max_isi_end – Max ISI to continue a burst (s). Ignored when dynamic_burst=True.

  • min_spikes – Minimum spikes per burst.

  • dynamic_burst – When True, compute the mean ISI of the whole train and define the burst boundary as burst_isi_fraction * mean_isi. This abandons hardcoded thresholds in favour of the train’s own temporal structure.

  • burst_isi_fraction – Fraction of mean ISI used as burst boundary when dynamic_burst=True (default 0.3, i.e. 30%).

Returns:

BurstResult object.
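
A sketch using the dynamic-burst mode, where the burst boundary is 30% of the train’s own mean ISI:

import numpy as np
from Synaptipy.core.analysis.firing_dynamics import calculate_bursts_logic

spike_times = np.array([0.100, 0.105, 0.112, 0.500, 0.900, 0.905, 0.911])
result = calculate_bursts_logic(spike_times, min_spikes=2,
                                dynamic_burst=True, burst_isi_fraction=0.3)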

Synaptipy.core.analysis.firing_dynamics.analyze_spikes_and_bursts(data, time, sampling_rate, threshold, max_isi_start, max_isi_end, refractory_ms=2.0, dynamic_burst=False, burst_isi_fraction=0.3, parameters=None)[source]

Detect spikes then detect bursts.

Synaptipy.core.analysis.firing_dynamics.run_burst_analysis_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for Burst Analysis.

class Synaptipy.core.analysis.firing_dynamics.TrainDynamicsResult(value, unit, is_valid=True, error_message=None, quality_flags=<factory>, metadata=<factory>, spike_count=0, mean_isi_s=None, cv=None, cv2=None, lv=None, adaptation_index=None, isis=None, parameters=<factory>)[source]

Bases: AnalysisResult

Result object for spike train dynamics analysis.

spike_count = 0
mean_isi_s = None
cv = None
cv2 = None
lv = None
adaptation_index = None
isis = None
parameters
Synaptipy.core.analysis.firing_dynamics.calculate_train_dynamics(spike_times)[source]

Compute native spike train statistical metrics.

Parameters:

spike_times – 1D NumPy array of spike times in seconds.

Returns:

TrainDynamicsResult.

Synaptipy.core.analysis.firing_dynamics.run_train_dynamics_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for Spike Train Dynamics.

Synaptipy.core.analysis.firing_dynamics.firing_dynamics_module(**kwargs)[source]

Module-level aggregator tab for firing-dynamics analyses.

Module 4 - Synaptic Events

Threshold-based detection, Template Match detection, and Baseline-Peak detection.

Core Protocol Module 4: Synaptic Events.

Consolidates all synaptic event detection methods (adaptive threshold, template matching, baseline-peak-kinetics) from event_detection.py into one self-contained module.

All registry wrapper functions return:

{
    "module_used": "synaptic_events",
    "metrics": { ... flat result keys ... }
}

Exports detect_minis_threshold as a backward-compatibility alias.

Synaptipy.core.analysis.synaptic_events.find_quiescent_baseline_rms(data, sample_rate, window_ms=20.0)[source]

Identify the quietest (minimum-variance) segment in a trace via a sliding window and return its RMS as the noise floor.

Unlike a fixed pre-trace window (e.g. trace[0:50]), this approach is robust to recordings with spontaneous activity at the start: the search considers the entire trace, selecting the 20 ms chunk with the smallest variance regardless of its position.

Parameters:
  • data – 1D signal array (mV or pA).

  • sample_rate – Sampling rate (Hz).

  • window_ms – Duration of the sliding window (ms, default 20).

Returns:

Tuple of (rms_noise_floor, (start_idx, end_idx)) where the indices define the quiescent window used for the RMS calculation.
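
A sketch deriving a noise-scaled detection threshold from the quiescent floor (the 3x factor is illustrative):

from Synaptipy.core.analysis.synaptic_events import find_quiescent_baseline_rms

rms, (start_idx, end_idx) = find_quiescent_baseline_rms(data, sample_rate=20000.0)
threshold = 3.0 * rms   # e.g. events must exceed three times the noise floor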

Synaptipy.core.analysis.synaptic_events.calculate_event_charge_dynamic(data, event_index, sample_rate, local_baseline, polarity='negative', max_duration_ms=100.0)[source]

Integrate event charge (area under curve) with a dynamic boundary.

The integration ends at whichever comes first:

  • The signal returns to local_baseline (event complete).

  • A large derivative transient indicates the onset of a subsequent summating event (onset detected as dV/dt > 3x the noise in the early derivative).

Parameters:
  • data – 1D signal array.

  • event_index – Sample index of the event peak.

  • sample_rate – Sampling rate (Hz).

  • local_baseline – Local baseline voltage/current level.

  • polarity"negative" or "positive".

  • max_duration_ms – Hard cap on integration window (ms).

Returns:

Signed charge (area under curve relative to baseline, in units of data * seconds, e.g. mV·s or pA·s).

Synaptipy.core.analysis.synaptic_events.fit_biexponential_decay(data, event_index, sample_rate, local_baseline, polarity='negative', fit_window_ms=80.0)[source]

Fit mono- and bi-exponential decays to a synaptic event.

Tries a bi-exponential fit first. Falls back to mono-exponential when the bi-exp fit does not converge or yields non-physical parameters (negative amplitudes or time constants).

Bi-exponential model (relative to baseline):

f(t) = A_fast * exp(-t / tau_fast) + A_slow * exp(-t / tau_slow)

with A_fast + A_slow normalised by the event peak amplitude at t=0.

Parameters:
  • data – 1-D signal array.

  • event_index – Sample index of the event peak.

  • sample_rate – Sampling rate (Hz).

  • local_baseline – Local baseline level (same units as data).

  • polarity"negative" or "positive".

  • fit_window_ms – Maximum duration of the decay segment to fit (ms).

Returns:

Dict with keys:

  • tau_mono_ms – mono-exp time constant (ms)

  • tau_fast_ms – fast component time constant (ms); None if bi-exp did not converge

  • tau_slow_ms – slow component time constant (ms); None if bi-exp did not converge

  • bi_exp_converged – True when bi-exp fit was accepted

  • decay_fit_error – None when fit succeeded; error message string otherwise

Return type:

dict

Synaptipy.core.analysis.synaptic_events.compute_local_pre_event_baseline(data, event_indices, sample_rate, pre_event_window_ms=2.0, polarity='negative')[source]

Compute a local pre-event baseline voltage for each detected event.

For summating synaptic events that ride on the decay of a previous event, the global resting potential is a poor amplitude reference. This function searches the pre_event_window_ms immediately preceding each event peak and returns the local “foot” voltage:

  • Negative polarity: the maximum (most depolarised) voltage in the search window - i.e. the point before the hyperpolarising/inward current event begins to deflect the trace.

  • Positive polarity: the minimum (most hyperpolarised) voltage.

Parameters:
  • data – 1D voltage/current array.

  • event_indices – Integer indices of detected event peaks.

  • sample_rate – Sampling rate in Hz.

  • pre_event_window_ms – Duration (ms) of the search window before each peak (default 2.0 ms; 1-3 ms recommended).

  • polarity – “negative” or “positive”.

Returns:

1D float array of local baseline values, one per event.

Synaptipy.core.analysis.synaptic_events.calculate_paired_pulse_ratio(data, time, stimulus_onsets_s, sample_rate, response_window_ms=20.0, polarity='negative')[source]

Calculate the Paired-Pulse Ratio (PPR) with residual decay correction.

A naive amplitude ratio (P2/P1) is contaminated by the unresolved decay tail of the first response. This function:

  1. Fits a mono-exponential decay to the P1 tail.

  2. Evaluates the fitted tail at the P2 onset to obtain the residual baseline offset.

  3. Measures P2 amplitude relative to the corrected (residual-subtracted) baseline instead of the global baseline.

PPR = A2_corrected / A1

Parameters:
  • data (np.ndarray) – 1-D signal array (pA or mV).

  • time (np.ndarray) – 1-D time array (seconds), same length as data.

  • stimulus_onsets_s (np.ndarray) – 1-D array of stimulus onset times (seconds). At least two are required.

  • sample_rate (float) – Sampling rate (Hz).

  • response_window_ms (float) – Duration (ms) of the post-stimulus window searched for the peak.

  • polarity (str) – "negative" for inward currents/hyperpolarisations; "positive" for depolarisations.

Returns:

Keys:

  • ppr – corrected P2/P1 amplitude ratio

  • ppr_naive – uncorrected P2/P1 ratio (no residual subtraction)

  • amplitude_p1 – P1 amplitude relative to pre-stimulus baseline

  • amplitude_p2_corrected – P2 amplitude after residual subtraction

  • amplitude_p2_naive – P2 amplitude relative to global baseline

  • residual_at_p2 – predicted P1 tail amplitude at P2 onset

  • tau_p1_ms – time constant of the P1 decay fit (ms)

  • interpulse_interval_ms – interval between S1 and S2 (ms)

  • error – error message string when calculation fails

Return type:

dict

Synaptipy.core.analysis.synaptic_events.detect_events_threshold(data, time, threshold, polarity='negative', refractory_period=0.002, rolling_baseline_window_ms=100.0, artifact_mask=None, use_quiescent_noise_floor=True, quiescent_window_ms=20.0)[source]

Detect events using topological prominence to handle shifting baselines.

By default uses a quiescent-noise-floor estimate: the RMS of the minimum-variance 20 ms chunk in the trace is used to set a dynamic noise threshold, preventing false positives even when spontaneous activity dominates the beginning of the recording.

Synaptipy.core.analysis.synaptic_events.detect_minis_threshold(data, time, threshold, polarity='negative', refractory_period=0.002, rolling_baseline_window_ms=100.0, artifact_mask=None, use_quiescent_noise_floor=True, quiescent_window_ms=20.0)

Detect events using topological prominence to handle shifting baselines.

By default uses a quiescent-noise-floor estimate: the RMS of the minimum-variance 20 ms chunk in the trace is used to set a dynamic noise threshold, preventing false positives even when spontaneous activity dominates the beginning of the recording.

Synaptipy.core.analysis.synaptic_events.run_event_detection_threshold_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for adaptive threshold event detection.

Synaptipy.core.analysis.synaptic_events.detect_events_template(data, sampling_rate, threshold_std, tau_rise, tau_decay, polarity='negative', rolling_baseline_window_ms=100.0, artifact_mask=None, time=None, min_event_distance_ms=0.0)[source]

Detect events using a multi-kernel matched-filter bank.

Three kernels are built using the specified tau_rise and tau_decay × 1, 2, 3 to tolerate dendritic filtering that prolongs event decay (Cable theory predicts a ~2-3× slowdown for distal inputs). A combined z-score trace (pointwise maximum across the three filtered traces) is used for peak detection, improving sensitivity to both somatic and dendritic events.

Synaptipy.core.analysis.synaptic_events.run_event_detection_template_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for template-matching event detection.

Synaptipy.core.analysis.synaptic_events.detect_events_baseline_peak_kinetics(data, sample_rate, direction='negative', baseline_window_s=0.5, baseline_step_s=0.1, threshold_sd_factor=3.0, filter_freq_hz=None, min_event_separation_ms=5.0, auto_baseline=True, rolling_baseline_window_ms=0.0)[source]

Detect events via stable-baseline estimation then prominence-based peak finding.

Synaptipy.core.analysis.synaptic_events.run_event_detection_baseline_peak_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for baseline-peak event detection.

Synaptipy.core.analysis.synaptic_events.synaptic_events_module(**kwargs)[source]

Module-level aggregator tab for synaptic event detection analyses.

Module 5 - Evoked Responses (Optogenetics)

TTL-gated optogenetic stimulus synchronisation, latency, response probability, and jitter.

Core Protocol Module 5: Evoked Responses.

Consolidates optogenetic stimulus synchronization (TTL-gated latency, probability, jitter analysis) from optogenetics.py.

All registry wrapper functions return:

{
    "module_used": "evoked_responses",
    "metrics": { ... flat result keys ... }
}
class Synaptipy.core.analysis.evoked_responses.OptoSyncResult(value, unit, is_valid=True, error_message=None, quality_flags=<factory>, metadata=<factory>, optical_latency_ms=None, response_probability=None, spike_jitter_ms=None, stimulus_count=0, success_count=0, failure_count=0, stimulus_onsets=None, stimulus_offsets=None, responding_spikes=<factory>, parameters=<factory>)[source]

Bases: AnalysisResult

Result object for optogenetic synchronization analysis.

optical_latency_ms = None
response_probability = None
spike_jitter_ms = None
stimulus_count = 0
success_count = 0
failure_count = 0
stimulus_onsets = None
stimulus_offsets = None
responding_spikes
parameters
Synaptipy.core.analysis.evoked_responses.extract_ttl_epochs(ttl_data, time, threshold=2.5, auto_threshold=True)[source]

Extract rising and falling edges of a digital TTL signal.

Returns:

Tuple of (onsets, offsets) arrays in seconds.
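
A sketch pairing rising and falling edges into pulse durations (assumes each onset has a matching offset):

from Synaptipy.core.analysis.evoked_responses import extract_ttl_epochs

onsets, offsets = extract_ttl_epochs(ttl_data, time, threshold=2.5)
n = min(len(onsets), len(offsets))
pulse_durations_s = offsets[:n] - onsets[:n]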

Synaptipy.core.analysis.evoked_responses.calculate_optogenetic_sync(ttl_data, action_potential_times, time, ttl_threshold=2.5, response_window_ms=20.0)[source]

Correlate TTL stimuli with action potential times.

Parameters:
  • ttl_data – Digital signal data trace.

  • action_potential_times – Pre-calculated spike/event times (seconds).

  • time – Timestamps of the trace.

  • ttl_threshold – Voltage threshold for TTL edge detection.

  • response_window_ms – Search window for APs after stimulus onset (ms).

Returns:

OptoSyncResult.

Synaptipy.core.analysis.evoked_responses.calculate_paired_pulse_ratio(data, time, stim1_onset_s, stim2_onset_s, response_window_ms=20.0, baseline_window_ms=5.0, fit_decay_from_ms=5.0, fit_decay_window_ms=30.0, polarity='negative', artifact_blanking_ms=1.0)[source]

Calculate Paired-Pulse Ratio with residual decay subtraction.

Without subtracting the residual exponential decay of the first event under the second stimulus window, the measured amplitude of the second response is artificially inflated (facilitation) or deflated (depression), yielding biologically invalid PPR values.

Algorithm:

  1. Measure amplitude of response 1 (R1) relative to its local pre-stimulus baseline.

  2. Fit a mono-exponential decay to the tail of R1 (from fit_decay_from_ms to fit_decay_window_ms after stim1_onset).

  3. Extrapolate the decay curve to estimate the residual baseline level at stim2_onset.

  4. Measure amplitude of response 2 (R2_raw) relative to its own pre-stimulus sample.

  5. Subtract the residual decay value from R2_raw to obtain R2_corrected.

  6. Return paired_pulse_ratio = R2_corrected / R1.

Parameters:
  • data – 1-D voltage/current array (mV or pA).

  • time – 1-D time array (s).

  • stim1_onset_s – Time of first stimulus onset (s).

  • stim2_onset_s – Time of second stimulus onset (s).

  • response_window_ms – Duration after each stimulus to search for peak (ms).

  • baseline_window_ms – Pre-stimulus baseline window (ms) to compute local baseline for each response.

  • fit_decay_from_ms – Offset from stim1_onset to start fitting decay (ms). Should be after the initial transient.

  • fit_decay_window_ms – Window duration for decay fit (ms).

  • polarity"negative" (inward/downward events, e.g. EPSCs) or "positive".

  • artifact_blanking_ms – Duration (ms) after each stimulus onset to ignore when searching for the peak response (default 1.0). Prevents the stimulus shock-wave artefact from being identified as the biological response peak.

Returns:

Dict with keys:

  • r1_amplitude – amplitude of first response (baseline-subtracted)

  • r2_amplitude_raw – raw amplitude of second response

  • r2_amplitude_corrected – R2 after subtracting residual decay

  • residual_at_stim2 – estimated residual baseline at stim2_onset

  • paired_pulse_ratio – R2_corrected / R1

  • decay_tau_ms – time constant of first event decay (ms)

  • ppr_error – None on success; error string on failure

Return type:

dict

Synaptipy.core.analysis.evoked_responses.run_opto_sync_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for optogenetic synchronization analysis.

Correlates TTL/optical stimulus pulses with detected events.

Synaptipy.core.analysis.evoked_responses.run_ppr_wrapper(data, time, sampling_rate, **kwargs)[source]

Wrapper for Paired-Pulse Ratio analysis with residual decay subtraction.

Synaptipy.core.analysis.evoked_responses.evoked_responses_module(**kwargs)[source]

Module-level aggregator tab for evoked-response analyses.

Analysis Registry

The central decorator-based registry that maps named analysis functions to the GUI and batch engine.

Analysis Registry for dynamic function registration and lookup.

This module provides a registry pattern that allows analysis functions to register themselves via decorators, enabling flexible pipeline configuration.

class Synaptipy.core.analysis.registry.AnalysisRegistry[source]

Bases: object

Registry for analysis functions.

Functions can be registered using the @AnalysisRegistry.register decorator, and then retrieved by name for use in batch processing pipelines.

classmethod register(name, type='analysis', **kwargs)[source]

Decorator to register an analysis or preprocessing function.

Parameters:
  • name – Unique identifier for the function (e.g., “spike_detection”)

  • type – The type of function (“analysis” or “preprocessing”)

  • **kwargs – Additional metadata to store with the function (e.g., ui_params)

Returns:

Decorator function

Example:

@AnalysisRegistry.register("spike_detection", ui_params=[...])
def run_spike_detection(data, time, sampling_rate, **kwargs):
    # ... analysis logic ...
    return results_dict
classmethod register_processor(name, **kwargs)[source]

Decorator to register a preprocessing function. Alias for register(name, type="preprocessing", **kwargs).

classmethod get_function(name)[source]

Retrieve a registered analysis function by name.

Parameters:

name – The registered name of the function

Returns:

The registered function, or None if not found

classmethod get_metadata(name)[source]

Retrieve metadata for a registered analysis function.

Parameters:

name – The registered name of the function

Returns:

Dictionary of metadata, or empty dict if not found

classmethod list_registered()[source]

Get a list of all registered analysis function names.

Returns:

List of registered function names

classmethod list_by_type(type_str)[source]

Get registered function names filtered by type.

Parameters:

type_str – The type to filter by (e.g., “analysis”, “preprocessing”)

Returns:

List of function names matching the given type

classmethod list_preprocessing()[source]

Get all registered preprocessing function names.

classmethod list_analysis()[source]

Get all registered analysis function names (excludes preprocessing).

classmethod mark_core_snapshot()[source]

Record the current registry keys as the immutable core set.

Call this once after importing the built-in analysis package but before loading any external plugins. unregister_plugins() uses this snapshot to know which entries must never be removed.

classmethod unregister_plugins()[source]

Remove all analyses that are NOT part of the core package.

Safe to call multiple times. Only affects entries added after the last mark_core_snapshot() call (i.e. plugin-contributed entries).

classmethod clear()[source]

Clear all registered functions (mainly for testing).

classmethod update_default_params(analysis_name, new_defaults)[source]

Update default values for a registered analysis.

Parameters:
  • analysis_name – Name of the analysis.

  • new_defaults – Dictionary of {param_name: new_value}

classmethod reset_to_factory(analysis_name=None)[source]

Reset metadata to factory defaults. If analysis_name is None, resets ALL.

Usage Example - Registry

import Synaptipy.core.analysis  # triggers all @register decorators
from Synaptipy.core.analysis.registry import AnalysisRegistry

names = AnalysisRegistry.list_registered()
meta = AnalysisRegistry.get_metadata("sag_ratio_analysis")
func = AnalysisRegistry.get_function("sag_ratio_analysis")
result = func(data, time, sampling_rate, baseline_start=0.0, baseline_end=0.1)

Batch Processing

Batch Analysis Engine for Synaptipy. Handles processing multiple files and aggregating results using a flexible registry-based pipeline.

The engine uses a registry-based architecture where analysis functions register themselves via decorators, and the pipeline configuration defines what analyses to run on which data scopes.

Output Design Principles

  1. Every row is fully traceable to its source (file, channel, trial, analysis).

  2. Metadata columns appear first; analysis results in the middle; internal/debug last.

  3. Scalar results live in their own columns; array values are summarised for tabular compatibility (Excel, Origin, R, MATLAB) and the raw arrays are kept under private _-prefixed keys that are stripped during CSV export.

  4. Channel physical units are always recorded so downstream scripts can auto-label axes.

  5. Recording-level metadata (protocol, duration, session time) is propagated when available.

Author: Anzal K Shahul <anzal.ks@gmail.com>

class Synaptipy.core.analysis.batch_engine.BatchAnalysisEngine(neo_adapter=None, max_workers=1)[source]

Bases: object

Engine for running analysis across multiple files/recordings using a flexible pipeline.

The engine uses a registry-based architecture where analysis functions register themselves via decorators, and the pipeline configuration defines what analyses to run on which data scopes.

Example:

engine = BatchAnalysisEngine()
files = [Path("file1.abf"), Path("file2.abf")]
pipeline = [
    {
        'analysis': 'spike_detection',
        'scope': 'all_trials',
        'params': {'threshold': -15.0, 'refractory_ms': 2.0}
    },
    {
        'analysis': 'rmp_analysis',
        'scope': 'average',
        'params': {'baseline_start': 0.0, 'baseline_end': 0.1}
    }
]
results_df = engine.run_batch(files, pipeline)

Initialize the batch analysis engine.

Parameters:
  • neo_adapter – Optional NeoAdapter instance. If None, creates a new one.

  • max_workers – Number of parallel worker processes for run_batch(). 1 (default) means fully sequential execution. Values > 1 enable ProcessPoolExecutor parallelism. Pass -1 to use all available CPU cores.

cancel()[source]

Request cancellation of the current batch run.

update_performance_settings(settings)[source]

Dynamically update performance limits without restarting.

Reads max_cpu_cores from settings and updates max_workers immediately so the next run_batch() call picks up the new value. This is the subscriber side of the pub/sub preferences_changed signal.

Parameters:

settings – Dict that may contain "max_cpu_cores" (int) and/or "max_ram_allocation_gb" (float, logged but not enforced here).

static list_available_analyses()[source]

Get a list of all registered analysis function names.

Returns:

List of available analysis names.

static get_analysis_info(name)[source]

Get information about a registered analysis function.

Parameters:

name – The registered name of the analysis function.

Returns:

Dictionary with function info (docstring, etc.) or None if not found.

run_batch(files, pipeline_config, progress_callback=None, channel_filter=None, rs_tolerance=0.2)[source]

Run analysis on a list of files/recordings using a flexible pipeline configuration.

When max_workers > 1 and files contains at least two items, the file-level loop is distributed across worker processes via ProcessPoolExecutor. The GUI thread is never blocked in either mode — callers should wrap this in a BatchWorker QThread.

Parameters:
  • files – List of file paths OR Recording objects to process.

  • pipeline_config – List of task dictionaries.

  • progress_callback – Optional callback (current, total, status_msg).

  • channel_filter – Optional list of channel names/IDs to process.

  • rs_tolerance – Maximum fractional increase in series resistance compared to the first valid Rs measurement before a sweep is flagged with rs_qc_warning. Default 0.20 (20 %). Set to float('inf') to disable the check.

Returns:

pandas DataFrame containing aggregated results with metadata.

Usage Example

from Synaptipy.core.analysis.batch_engine import BatchAnalysisEngine
from pathlib import Path

engine = BatchAnalysisEngine(max_workers=4)

pipeline = [
    {
        'analysis': 'spike_detection',
        'scope': 'all_trials',
        'params': {'threshold': -20.0, 'refractory_ms': 2.0}
    },
]

files = [Path("file1.abf"), Path("file2.abf")]
results_df = engine.run_batch(files, pipeline)

Epoch Manager

EpochManager - Hardware TTL and manual experimental epoch management.

Experimental recordings often contain distinct phases (Baseline, Stimulation, Washout). EpochManager either auto-detects these boundaries from a TTL/Digital-Input channel or lets the researcher define them manually.

Once epochs are defined, per-epoch data slices can be extracted from any Channel for downstream analysis (e.g. tracking plasticity changes across Stim vs. Baseline).

class Synaptipy.core.analysis.epoch_manager.Epoch(name, start_time, end_time, epoch_type='manual', metadata=<factory>)[source]

Bases: object

A named time window within a recording.

name

Human-readable label (e.g. "Baseline", "Stim", "Washout").

Type:

str

start_time

Epoch start in seconds (relative to recording onset).

Type:

float

end_time

Epoch end in seconds.

Type:

float

epoch_type

Either "ttl" (auto-detected from hardware) or "manual".

Type:

str

metadata

Optional arbitrary key/value annotations.

Type:

Dict[str, Any]

name
start_time
end_time
epoch_type = 'manual'
metadata
property duration

Epoch duration in seconds.

contains(t)[source]

Return True if time t falls within [start_time, end_time].

class Synaptipy.core.analysis.epoch_manager.EpochManager[source]

Bases: object

Manage experimental epoch boundaries for a recording.

Epochs are ordered by Epoch.start_time. Overlapping epochs are allowed so that the same window can be labelled with multiple semantic tags.

Typical workflow:

em = EpochManager()

# Option A: auto-detect from a TTL channel
em.from_ttl(ttl_data, time_vector, pre_stim_s=1.0, post_stim_s=1.0)

# Option B: manual definition
em.add_manual_epoch("Baseline", 0.0, 60.0)
em.add_manual_epoch("Stim",     60.0, 120.0)
em.add_manual_epoch("Washout",  120.0, 300.0)

# Slice channel data by epoch
slices = em.get_epoch_slices(channel, trial_index=0)
DEFAULT_EPOCH_NAMES = ('Baseline', 'Stim', 'Washout')
property epochs

Sorted list of all defined epochs.

property epoch_names

Names of all defined epochs in time order.

add_manual_epoch(name, start_time, end_time, **metadata)[source]

Add a manually defined epoch.

Parameters:
  • name – Label for the epoch.

  • start_time – Start time in seconds.

  • end_time – End time in seconds.

  • **metadata – Optional key/value annotations stored in Epoch.metadata.

Returns:

The newly created Epoch.

Raises:

ValueError – If end_time <= start_time.

from_ttl(ttl_data, time, ttl_threshold=2.5, pre_stim_s=1.0, post_stim_s=1.0, min_inter_epoch_s=0.5, stim_name='Stim', baseline_name='Baseline', washout_name='Washout')[source]

Auto-generate epochs from a TTL / Digital-Input channel.

Detects TTL pulse boundaries using extract_ttl_epochs(), then creates:

  • A Baseline epoch from time[0] to first_onset - pre_stim_s.

  • A Stim epoch spanning the detected TTL activity (first_onset - pre_stim_s to last_offset + post_stim_s).

  • A Washout epoch from last_offset + post_stim_s to time[-1], if enough time remains.

Returns:

List of the newly created Epoch objects. The manager’s epochs list is also updated in place.

get_epoch(name)[source]

Return the first epoch whose name matches name (case-insensitive).

epochs_at_time(t)[source]

Return all epochs that contain time t.

get_epoch_slices(channel, trial_index=0)[source]

Extract (data, time) slices for every epoch from a channel trial.

Parameters:
  • channel – A Channel instance.

  • trial_index – Which trial to slice (default 0).

Returns:

Dict mapping epoch name to (data_slice, time_slice) ndarrays. Epochs with no overlapping samples map to empty arrays.
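
Continuing the workflow above, a sketch iterating the per-epoch slices:

slices = em.get_epoch_slices(channel, trial_index=0)
for name, (data_slice, time_slice) in slices.items():
    print(name, data_slice.size)   # samples falling inside each epoch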

remove_epoch(name)[source]

Remove the first epoch matching name.

Returns:

True if an epoch was removed, False if not found.

clear()[source]

Remove all epochs.


Signal Processing

Signal processing utilities for Synaptipy. Includes filtering and trace quality checks.

Synaptipy.core.signal_processor.validate_sampling_rate(fs)[source]

Validate sampling rate and warn if suspiciously low.

Parameters:

fs – Sampling rate in Hz.

Returns:

True if valid (positive), False otherwise.

Synaptipy.core.signal_processor.check_trace_quality(data, sampling_rate)[source]

Assess the quality of a recording trace.

Checks for:

  • Signal-to-Noise Ratio (SNR) estimation

  • Baseline drift

  • 50/60 Hz line-noise contamination

Parameters:
  • data – 1D numpy array of the signal (e.g., voltage in mV or current in pA).

  • sampling_rate – Sampling rate in Hz.

Returns:

Dictionary containing quality metrics and flags.

Synaptipy.core.signal_processor.bandpass_filter(data, lowcut, highcut, fs, order=5)[source]

Apply a Butterworth bandpass filter to the data. Uses Second Order Sections (SOS) for numerical stability.

Parameters:
  • data – Input signal array

  • lowcut – Low cutoff frequency in Hz

  • highcut – High cutoff frequency in Hz

  • fs – Sampling frequency in Hz

  • order – Filter order (1-10, default 5)

Returns:

Filtered data, or original data if filtering fails

Synaptipy.core.signal_processor.lowpass_filter(data, cutoff, fs, order=5)[source]

Apply a Butterworth lowpass filter. Uses Second Order Sections (SOS) for numerical stability.

Parameters:
  • data – Input signal array

  • cutoff – Cutoff frequency in Hz

  • fs – Sampling frequency in Hz

  • order – Filter order (1-10, default 5)

Returns:

Filtered data, or original data if filtering fails

Synaptipy.core.signal_processor.highpass_filter(data, cutoff, fs, order=5)[source]

Apply a Butterworth highpass filter. Uses Second Order Sections (SOS) for numerical stability.

Parameters:
  • data – Input signal array

  • cutoff – Cutoff frequency in Hz

  • fs – Sampling frequency in Hz

  • order – Filter order (1-10, default 5)

Returns:

Filtered data, or original data if filtering fails

Synaptipy.core.signal_processor.notch_filter(data, freq, Q, fs)[source]

Apply a notch filter to remove a specific frequency. Uses SOS format via zpk2sos for numerical stability.

Parameters:
  • data – Input signal array

  • freq – Notch frequency in Hz

  • Q – Quality factor (higher = narrower notch)

  • fs – Sampling frequency in Hz

Returns:

Filtered data, or original data if filtering fails

Synaptipy.core.signal_processor.comb_filter(data, freq, Q, fs)[source]

Apply an IIR comb filter to remove a fundamental frequency and its harmonics. Useful for line noise (e.g., 50 Hz or 60 Hz).

Parameters:
  • data – Input signal array

  • freq – Fundamental frequency to remove in Hz (e.g., 50 or 60)

  • Q – Quality factor (higher = narrower notches)

  • fs – Sampling frequency in Hz

Returns:

Filtered data, or original data if filtering fails
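
A minimal filtering chain sketch (cutoff and Q values illustrative):

from Synaptipy.core.signal_processor import lowpass_filter, notch_filter

fs = 20000.0
filtered = lowpass_filter(data, cutoff=1000.0, fs=fs, order=5)
filtered = notch_filter(filtered, freq=50.0, Q=30.0, fs=fs)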

Synaptipy.core.signal_processor.subtract_baseline_mode(data, decimals=None)[source]

Subtract baseline using the mode of the distribution of values.

Parameters:
  • data – Input signal array

  • decimals – Number of decimal places to round to for mode calculation. If None, it tries to infer a reasonable precision or defaults to 1.

Returns:

Data with baseline subtracted (aligned to 0)

Synaptipy.core.signal_processor.subtract_baseline_mean(data)[source]

Subtract the mean of the entire signal.

Synaptipy.core.signal_processor.subtract_baseline_median(data)[source]

Subtract the median of the entire signal.

Synaptipy.core.signal_processor.subtract_baseline_linear(data)[source]

Subtract a linear trend (detrend) from the signal. Useful for removing drift.

Synaptipy.core.signal_processor.subtract_baseline_region(data, t, start_t, end_t)[source]

Subtract the mean value calculated from a specific time window.

Parameters:
  • data – Signal array

  • t – Time vector (must be same length as data)

  • start_t – Start time of baseline window

  • end_t – End time of baseline window

Synaptipy.core.signal_processor.blank_artifact(data, time_vector, onset_time, duration_ms, method='hold')[source]

Suppress a stimulus artifact by replacing a time window.

Three interpolation modes are available:

  • "hold" — replace the artifact window with the last pre-artifact sample value (sample-and-hold).

  • "zero" — set the artifact window to zero.

  • "linear" — linearly interpolate between the pre- and post-artifact boundary values.

Parameters:
  • data – 1-D signal array.

  • time_vector – 1-D time array (same length as data), in seconds.

  • onset_time – Start of the artifact window, in seconds.

  • duration_ms – Duration of the artifact window, in milliseconds.

  • method – Interpolation mode — "hold", "zero", or "linear". Default "hold".

Returns:

Copy of data with the artifact window replaced.

Raises:

ValueError – If method is not one of the recognised modes.
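
Usage Example

A minimal sketch, assuming a synthetic stimulus artifact at t = 0.1 s, blanking a 2 ms window with two of the modes:

import numpy as np
from Synaptipy.core.signal_processor import blank_artifact

fs = 20_000.0
t = np.arange(0, 1.0, 1.0 / fs)
trace = np.random.randn(t.size)
trace[int(0.1 * fs)] += 100.0                  # synthetic stimulus artifact

held = blank_artifact(trace, t, onset_time=0.1, duration_ms=2.0, method='hold')
bridged = blank_artifact(trace, t, onset_time=0.1, duration_ms=2.0, method='linear')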

Synaptipy.core.signal_processor.find_artifact_windows(data, fs, slope_threshold, padding_ms=2.0)[source]

Identify time windows containing high-slope artifacts.

Algorithm:

  1. Calculate the absolute gradient of the data.

  2. Threshold the gradient to find high-slope points.

  3. Dilate the boolean mask by padding_ms to capture the artifact tail/ringing.

Parameters:
  • data – Signal array.

  • fs – Sampling rate in Hz.

  • slope_threshold – Threshold for the absolute gradient (e.g. pA/sample or mV/sample).

  • padding_ms – Time to expand the mask around detected peaks (in milliseconds).

Returns:

Boolean mask of the same shape as data, where True indicates an artifact.
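
Usage Example

A minimal sketch pairing detection with downstream handling; the threshold value here is arbitrary and should be tuned to your signal's units and gain.

import numpy as np
from Synaptipy.core.signal_processor import find_artifact_windows

fs = 20_000.0
trace = np.random.randn(int(fs))
trace[5_000] += 500.0                          # sharp synthetic transient

mask = find_artifact_windows(trace, fs, slope_threshold=50.0, padding_ms=2.0)
clean = trace.copy()
clean[mask] = np.nan                           # exclude flagged samples from stats
n_flagged = int(mask.sum())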

Synaptipy.core.signal_processor.compute_psd(data, sampling_rate, nperseg=None, window='hann')[source]

Compute Power Spectral Density (PSD) using Welch’s method.

Parameters:
  • data – 1D signal array.

  • sampling_rate – Sampling rate in Hz.

  • nperseg – FFT segment length. Defaults to min(len(data), 4096).

  • window – Window function name passed to scipy.signal.welch() (default "hann").

Returns:

Tuple (frequencies, psd) where frequencies is in Hz and psd is in (data_units)^2/Hz. Both arrays are 1-D float64. If scipy is unavailable or the computation fails, a pair of empty arrays is returned.
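
Usage Example

A minimal sketch: estimate the spectrum of a synthetic sweep and locate its dominant component (here a 50 Hz test tone).

import numpy as np
from Synaptipy.core.signal_processor import compute_psd

fs = 20_000.0
t = np.arange(0, 2.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)

freqs, psd = compute_psd(trace, sampling_rate=fs)
if freqs.size:                                 # empty arrays signal failure
    peak_hz = freqs[np.argmax(psd)]            # expect ~50 Hz here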

Synaptipy.core.signal_processor.multi_harmonic_notch(data, fundamental_hz, fs, max_harmonics=None, Q=30.0)[source]

Strip a fundamental frequency and its harmonics using cascaded notch filters.

Applies a discrete notch at fundamental_hz, 2 * fundamental_hz, 3 * fundamental_hz, …, up to the Nyquist limit (or max_harmonics, whichever comes first).

Prefer comb_filter() (an IIR comb via scipy.signal.iircomb()) when your scipy version supports it; this function is the always-available fallback, built from cascaded notch_filter() calls.

Parameters:
  • data – Input signal array.

  • fundamental_hz – Fundamental line-noise frequency to remove (e.g. 50 or 60).

  • fs – Sampling rate in Hz.

  • max_harmonics – Maximum number of harmonics to remove including the fundamental. None means remove all harmonics below Nyquist.

  • Q – Quality factor for each notch (higher = narrower). Default 30.

Returns:

Filtered signal, or original data if filtering is impossible.
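
Usage Example

A minimal sketch: strip 60 Hz mains contamination and its harmonics up to 300 Hz.

import numpy as np
from Synaptipy.core.signal_processor import multi_harmonic_notch

fs = 20_000.0
trace = np.random.randn(int(fs))               # stand-in for a noisy sweep

# Notches at 60, 120, 180, 240 and 300 Hz (the fundamental counts toward max_harmonics)
clean = multi_harmonic_notch(trace, fundamental_hz=60.0, fs=fs,
                             max_harmonics=5, Q=30.0)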

Processing Pipeline

Signal Processing Pipeline.

Formalizes the order of operations for signal processing (e.g., Baseline -> Filter). Ensures that both visualization and analysis use the exact same processing sequence.

class Synaptipy.core.processing_pipeline.SignalProcessingPipeline[source]

Bases: object

Manages an ordered list of signal processing steps.

add_step(step_config, index=None)[source]

Add a processing step to the pipeline.

Parameters:
  • step_config – Dictionary defining the step (e.g., {‘type’: ‘baseline’, ‘method’: ‘mean’})

  • index – Optional index to insert at. If None, appends to end.

remove_step_by_type(step_type)[source]

Remove all steps of a specific type (e.g. ‘baseline’).

clear()[source]

Clear all steps.

get_steps()[source]

Return a copy of the current steps.

set_steps(steps)[source]

Replace all steps.

process(data, fs, time_vector=None)[source]

Apply all steps in order to the data.

Parameters:
  • data – Input signal array

  • fs – Sampling rate in Hz

  • time_vector – Optional time vector (required for region-based baseline)

Returns:

Processed data array
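
Usage Example

A minimal sketch of the Baseline -> Filter ordering. The baseline step dict follows the example given under add_step(); the filter step dict follows the schema documented for apply_trace_corrections() below.

import numpy as np
from Synaptipy.core.processing_pipeline import SignalProcessingPipeline

pipeline = SignalProcessingPipeline()
pipeline.add_step({'type': 'baseline', 'method': 'mean'})
pipeline.add_step({'type': 'filter', 'method': 'lowpass',
                   'cutoff': 1000, 'order': 5})

fs = 20_000.0
trace = np.random.randn(int(fs))
processed = pipeline.process(trace, fs)        # baseline first, then filter

pipeline.remove_step_by_type('baseline')       # drop all baseline steps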

Synaptipy.core.processing_pipeline.apply_trace_corrections(data, time, fs, *, ljp_mv=0.0, pn_traces=None, pn_scale=1.0, pre_event_window_s=None, artifact_interp_steps=None, filter_steps=None)[source]

Apply the immutable five-step trace correction in a guaranteed order.

Regardless of the order the user toggles settings in the GUI, this function must be used as the single entry point for all backend corrections so that the execution order is always:

Step A - LJP Voltage Offset

V_true = V_recorded - ljp_mv

Step B - P/N Leak Subtraction

If pn_traces is supplied, compute the per-sample mean across the sub-threshold repetitions, scale by pn_scale, and subtract from the corrected trace. This removes capacitive transients and steady-state leak currents without affecting the signal of interest.

Step C - Scalar Noise-Floor Zeroing

Subtract the median of the user-specified pre-event window pre_event_window_s = (t_start, t_end). Because the LJP and P/N corrections have already been applied, this median reflects only the residual noise floor, not a physiological offset.

Step D - Pre-filter Artifact Interpolation

Linearly interpolate across each stimulus artifact defined in artifact_interp_steps. Running this after A-C and before filtering prevents Gibbs ringing: the DSP filter operates on an already-flat waveform without sharp transient edges.

Step E - DSP Filtering

Apply any filters listed in filter_steps (same dict schema as SignalProcessingPipeline: {'type': 'filter', 'method': 'lowpass', 'cutoff': 1000, 'order': 5}). Running filters after A-D prevents edge artefacts left by the transient subtraction from being smeared across the waveform.

Parameters:
  • data – Raw (uncorrected) signal array.

  • time – Time vector aligned with data (seconds).

  • fs – Sampling rate in Hz.

  • ljp_mv – Liquid Junction Potential in mV. Step A only runs when ljp_mv != 0.0.

  • pn_traces – 2-D array of shape (n_sweeps, n_samples) containing the sub-threshold P/N sweeps. Step B is skipped when pn_traces is None.

  • pn_scale – Scalar factor applied to the averaged P/N template before subtraction (default 1.0).

  • pre_event_window_s(t_start, t_end) tuple in seconds. Step C is skipped when this is None.

  • artifact_interp_steps – List of artifact dicts with keys onset_time (s) and duration_ms (ms). Each defines a stimulus artifact to linearly interpolate. Step D is skipped when None.

  • filter_steps – List of filter dicts consumed by SignalProcessingPipeline.process(). Step E is skipped when the list is empty or None.

Returns:

Corrected signal array (always a copy — the input is never mutated).
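
Usage Example

A minimal sketch of the full A-E sequence on synthetic data; the P/N sweeps array and all numeric values are stand-ins, and the artifact/filter dicts follow the schemas described above.

import numpy as np
from Synaptipy.core.processing_pipeline import apply_trace_corrections

fs = 20_000.0
t = np.arange(0, 1.0, 1.0 / fs)
data = np.random.randn(t.size)                 # stand-in raw sweep
pn_sweeps = 0.25 * np.random.randn(4, t.size)  # stand-in P/4 sub-threshold sweeps

corrected = apply_trace_corrections(
    data, t, fs,
    ljp_mv=12.5,                                          # Step A
    pn_traces=pn_sweeps, pn_scale=4.0,                    # Step B (P/4 protocol)
    pre_event_window_s=(0.0, 0.05),                       # Step C
    artifact_interp_steps=[{'onset_time': 0.1,
                            'duration_ms': 2.0}],         # Step D
    filter_steps=[{'type': 'filter', 'method': 'lowpass',
                   'cutoff': 1000, 'order': 5}],          # Step E
)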


Plugin System

Plugin Manager for Synaptipy.

Scans two plugin directories and dynamically loads external Python scripts. Any script using the @AnalysisRegistry.register decorator will automatically populate the UI and Batch Engine.

Search order:

  1. Built-in examples: <project_root>/examples/plugins/ - shipped with the package so features work out-of-the-box.

  2. User plugins: ~/.synaptipy/plugins/ - personal or third-party additions.

When the same stem name appears in both directories the user’s copy takes precedence and a warning is logged.
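
Usage Example

A minimal, hypothetical plugin file for ~/.synaptipy/plugins/. Only the decorator name is documented on this page; the import path and decorator signature below are assumptions, so check the shipped scripts in examples/plugins/ for the real contract.

# ~/.synaptipy/plugins/mean_amplitude.py (hypothetical)
import numpy as np
from Synaptipy.core.analysis_registry import AnalysisRegistry  # assumed import path

@AnalysisRegistry.register  # exact signature may differ; see examples/plugins/
def mean_amplitude(data, fs):
    """Toy analysis: mean absolute amplitude of a sweep."""
    return {'mean_abs_amplitude': float(np.mean(np.abs(data)))}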


class Synaptipy.application.plugin_manager.PluginManager[source]

Bases: object

Manages the discovery, loading, and registration of third-party plugins.

classmethod create_plugin_directory()[source]

Ensures the user plugin directory exists.

classmethod get_plugin_files()[source]

Returns a deduplicated list of plugin .py files from both examples/plugins/ and ~/.synaptipy/plugins/.

The user directory takes precedence: if a file with the same stem exists in both locations, the examples copy is skipped and a warning is emitted so the author knows their local version is active.

classmethod load_plugins()[source]

Dynamically imports all plugins discovered by get_plugin_files().

Plugins from examples/plugins/ are loaded first, then user plugins. A bad plugin (ImportError, SyntaxError, or any other exception) is skipped gracefully so it does not crash the main application.

Loading is skipped entirely when the enable_plugins QSettings key is False (set via Preferences -> Extensions).

classmethod reload_plugins()[source]

Hot-reload plugins without restarting the application.

Purges all plugin-contributed analyses from AnalysisRegistry, then re-loads plugins if the enable_plugins setting is True. Call this after the user toggles the “Enable Custom Plugins” preference, then rebuild the Analyser UI to reflect the change.

Usage Example

from Synaptipy.application.plugin_manager import PluginManager

PluginManager.load_plugins()    # discover and import all plugins at startup
# ... later, after the user toggles the "Enable Custom Plugins" preference ...
PluginManager.reload_plugins()  # purge and re-import without restarting

Exporters

NWB Exporter

Exporter for saving Recording data to the NWB:N 2.0 format. Utilizes metadata extracted by NeoAdapter and stored in data_model objects.

class Synaptipy.infrastructure.exporters.nwb_exporter.NWBHDF5IO(*args, **kwargs)[source]

Bases: object

Sentinel: pynwb not installed.

class Synaptipy.infrastructure.exporters.nwb_exporter.NWBFile(*args, **kwargs)[source]

Bases: object

Sentinel: pynwb not installed.

class Synaptipy.infrastructure.exporters.nwb_exporter.CurrentClampSeries(*args, **kwargs)[source]

Bases: object

Sentinel: pynwb not installed.

class Synaptipy.infrastructure.exporters.nwb_exporter.CurrentClampStimulusSeries(*args, **kwargs)[source]

Bases: object

Sentinel: pynwb not installed.

class Synaptipy.infrastructure.exporters.nwb_exporter.IntracellularElectrode(*args, **kwargs)[source]

Bases: object

Sentinel: pynwb not installed.

class Synaptipy.infrastructure.exporters.nwb_exporter.PatchClampSeries(*args, **kwargs)[source]

Bases: object

Sentinel: pynwb not installed.

class Synaptipy.infrastructure.exporters.nwb_exporter.VoltageClampSeries(*args, **kwargs)[source]

Bases: object

Sentinel: pynwb not installed.

class Synaptipy.infrastructure.exporters.nwb_exporter.VoltageClampStimulusSeries(*args, **kwargs)[source]

Bases: object

Sentinel: pynwb not installed.

class Synaptipy.infrastructure.exporters.nwb_exporter.NWBExporter[source]

Bases: object

Handles exporting Recording domain objects to NWB files.

export(recording, output_path, session_metadata, analysis_results=None)[source]

Exports the given Recording object to an NWB file.

Parameters:
  • recording – The Recording object containing data and metadata.

  • output_path – The Path object where the .nwb file will be saved.

  • session_metadata – A dictionary containing user-provided or default metadata required for NWBFile creation. Expected keys: ‘session_description’, ‘identifier’, ‘session_start_time’, plus optional ‘experimenter’, ‘lab’, ‘institution’, ‘session_id’, and Subject/Device/Electrode info.

  • analysis_results – Optional dict (or list of dicts) produced by BatchAnalysisEngine.run_batch. When provided, discrete-event arrays stored under the _raw_arrays key are written into an NWB ProcessingModule as DynamicTable objects. No analysis computation is re-run.

Raises:

ExportError – If any error occurs during the NWB file creation or writing.

Usage Example

from pathlib import Path
from datetime import datetime

from Synaptipy.infrastructure.exporters.nwb_exporter import NWBExporter

exporter = NWBExporter()
metadata = {
    'session_description': 'Recording session',
    'identifier': 'session123',                         # required
    'session_start_time': datetime.now().astimezone(),  # required (timezone-aware)
    'experimenter': 'Researcher Name',
    'lab': 'Lab Name',
    'institution': 'Institution',
    'experiment_description': 'Experiment details',
    'session_id': 'session123',
}
# `recording` is a previously loaded Recording object
exporter.export(recording, Path('/path/to/output.nwb'), metadata)

Licensing

Synaptipy is released under the GNU Affero General Public License Version 3 (AGPL-3.0). See the LICENSE file for full terms.