API for module: spikeinterface

Class: AnalyzerExtension
  Docstring:
    This is the base class to extend the SortingAnalyzer.
    It handles persistence to disk for any computation related to the SortingAnalyzer,
    for instance:
      * waveforms
      * principal components
      * spike amplitudes
      * quality metrics
    
    Extensions can be registered on-the-fly at import time with the register_result_extension() mechanism.
    This also enables users to implement any custom computation on top of the SortingAnalyzer.
    
    An extension needs to inherit from this class and implement some attributes and abstract methods:
    
      * extension_name
      * depend_on
      * need_recording
      * use_nodepipeline
      * nodepipeline_variables only if use_nodepipeline=True
      * need_job_kwargs
      * _set_params()
      * _run()
      * _select_extension_data()
      * _merge_extension_data()
      * _get_data()
    
    The subclass must set the `extension_name` class attribute (which is None by default) to a non-None value.
    
    The subclass must also handle an attribute `data`, a dict containing the results after `run()`.
    
    All AnalyzerExtensions have an associated function created with the function factory; for instance
    compute_unit_location(sorting_analyzer, ...) is equivalent to sorting_analyzer.compute("unit_location", ...)
  __init__(self, sorting_analyzer)
  Method: copy(self, new_sorting_analyzer, unit_ids=None)
    Docstring:
      None
  Method: delete(self)
    Docstring:
      Delete the extension from the folder or zarr and from the dict.
  Method: get_data(self, *args, **kwargs)
    Docstring:
      None
  Method: get_pipeline_nodes(self)
    Docstring:
      None
  Method: load_data(self)
    Docstring:
      None
  Method: load_params(self)
    Docstring:
      None
  Method: load_run_info(self)
    Docstring:
      None
  Method: merge(self, new_sorting_analyzer, merge_unit_groups, new_unit_ids, keep_mask=None, verbose=False, **job_kwargs)
    Docstring:
      None
  Method: reset(self)
    Docstring:
      Reset the extension.
      Delete the sub folder and create a new empty one.
  Method: run(self, save=True, **kwargs)
    Docstring:
      None
  Method: save(self)
    Docstring:
      None
  Method: set_params(self, save=True, **params)
    Docstring:
      Set parameters for the extension and
      make it persistent in json.
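
  Example: a minimal sketch of a hypothetical custom extension. The class name, the import path
  and the use of `self.sorting_analyzer` / `self.params` / `self.data` are assumptions based on
  the docstring above; the attributes and abstract methods are the ones listed.

    from spikeinterface.core.sortinganalyzer import AnalyzerExtension, register_result_extension

    class ComputeMyMetric(AnalyzerExtension):
        # hypothetical extension: one scalar per unit, derived from the "templates" data
        extension_name = "my_metric"
        depend_on = ["templates"]
        need_recording = False
        use_nodepipeline = False
        need_job_kwargs = False

        def _set_params(self, scale=1.0):
            return dict(scale=scale)

        def _run(self, **job_kwargs):
            templates = self.sorting_analyzer.get_extension("templates").get_data()
            # results must be stored in the `data` dict
            self.data["metric"] = templates.max(axis=(1, 2)) * self.params["scale"]

        def _select_extension_data(self, unit_ids):
            # sketch only: a real implementation returns the subset of `data` matching unit_ids
            return dict(self.data)

        def _merge_extension_data(self, merge_unit_groups, new_unit_ids, new_sorting_analyzer, **kwargs):
            # sketch only: a real implementation must remap per-unit entries after merging
            return dict(self.data)

        def _get_data(self):
            return self.data["metric"]

    # makes the extension available as sorting_analyzer.compute("my_metric")
    # and as the generated compute_my_metric(sorting_analyzer, ...) function
    register_result_extension(ComputeMyMetric)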

Class: AppendSegmentRecording
  Docstring:
    Takes as input a list of parent recordings each with multiple segments and
    returns a single multi-segment recording that "appends" all segments from
    all parent recordings.
    
    For instance, given one recording with 2 segments and one recording with 3 segments,
    this class will give one recording with 5 segments.
    
    Parameters
    ----------
    recording_list : list of BaseRecording
        A list of recordings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across recordings
  __init__(self, recording_list, sampling_frequency_max_diff=0)
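
  Example: appending a 2-segment and a 3-segment recording (generate_recording is a spikeinterface
  demo helper, assumed available here):

    import spikeinterface.core as si

    rec2 = si.generate_recording(durations=[10.0, 10.0])        # 2 segments
    rec3 = si.generate_recording(durations=[10.0, 10.0, 10.0])  # 3 segments

    appended = si.AppendSegmentRecording([rec2, rec3])
    assert appended.get_num_segments() == 5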

Class: AppendSegmentSorting
  Docstring:
    Returns a sorting that "appends" all segments from all sortings
    into one multi-segment sorting.
    
    Parameters
    ----------
    sorting_list : list of BaseSorting
        A list of sortings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across sortings
  __init__(self, sorting_list, sampling_frequency_max_diff=0)

Class: BaseEvent
  Docstring:
    Abstract class representing events.

    Parameters
    ----------
    channel_ids : list or np.array
        The channel ids
    structured_dtype : dtype or dict
        The dtype of the events. If dict, each key is the channel_id and values must be
        the dtype of the channel (also structured). If dtype, each channel is assigned the
        same dtype.
        In case of structured dtypes, the "time" or "timestamp" field name must be present.
  __init__(self, channel_ids, structured_dtype)
  Method: add_event_segment(self, event_segment)
    Docstring:
      None
  Method: get_dtype(self, channel_id)
    Docstring:
      None
  Method: get_event_times(self, channel_id: 'int | str | None' = None, segment_index: 'int | None' = None, start_time: 'float | None' = None, end_time: 'float | None' = None)
    Docstring:
      Return events timestamps of a channel in seconds.
      
      Parameters
      ----------
      channel_id : int | str | None, default: None
          The event channel id
      segment_index : int | None, default: None
          The segment index, required for multi-segment objects
      start_time : float | None, default: None
          The start time in seconds
      end_time : float | None, default: None
          The end time in seconds
      
      Returns
      -------
      np.array
          1d array of timestamps for the event channel
  Method: get_events(self, channel_id: 'int | str | None' = None, segment_index: 'int | None' = None, start_time: 'float | None' = None, end_time: 'float | None' = None)
    Docstring:
      Return events of a channel in its native structured type.
      
      Parameters
      ----------
      channel_id : int | str | None, default: None
          The event channel id
      segment_index : int | None, default: None
          The segment index, required for multi-segment objects
      start_time : float | None, default: None
          The start time in seconds
      end_time : float | None, default: None
          The end time in seconds
      
      Returns
      -------
      np.array
          Structured np.array of dtype `get_dtype(channel_id)`
  Method: get_num_channels(self)
    Docstring:
      None
  Method: get_num_segments(self)
    Docstring:
      None
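
  Example: a usage sketch, assuming `event` is a concrete BaseEvent instance and "TTL" is a
  hypothetical channel id:

    # timestamps (in seconds) for one event channel in a 10 s window
    times = event.get_event_times(channel_id="TTL", segment_index=0,
                                  start_time=0.0, end_time=10.0)
    # the same events in their native structured dtype (see get_dtype)
    events = event.get_events(channel_id="TTL", segment_index=0)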

Class: BaseEventSegment
  Docstring:
    Abstract class representing events inside a segment.
  __init__(self)
  Method: get_event_times(self, channel_id: 'int | str', start_time: 'float', end_time: 'float') -> 'np.ndarray'
    Docstring:
      Returns event timestamps of a channel in seconds.

      Parameters
      ----------
      channel_id : int | str
          The event channel id
      start_time : float
          The start time in seconds
      end_time : float
          The end time in seconds
      
      Returns
      -------
      np.array
          1d array of timestamps for the event channel
  Method: get_events(self, channel_id, start_time, end_time)
    Docstring:
      None

Class: BaseRecording
  Docstring:
    Abstract class representing a multichannel timeseries (or block of raw ephys traces).
    Internally handles a list of RecordingSegments.
  __init__(self, sampling_frequency: 'float', channel_ids: 'list', dtype)
  Method: add_recording_segment(self, recording_segment)
    Docstring:
      Adds a recording segment.
      
      Parameters
      ----------
      recording_segment : BaseRecordingSegment
          The recording segment to add
  Method: astype(self, dtype, round: 'bool | None' = None)
    Docstring:
      None
  Method: binary_compatible_with(self, dtype=None, time_axis=None, file_paths_length=None, file_offset=None, file_suffix=None, file_paths_lenght=None)
    Docstring:
      Check if the recording is binary compatible with some constraints on:

        * dtype
        * time_axis
        * len(file_paths)
        * file_offset
        * file_suffix
  Method: frame_slice(self, start_frame: 'int | None', end_frame: 'int | None') -> 'BaseRecording'
    Docstring:
      Returns a new recording with sliced frames. Note that this operation is not in place.
      
      Parameters
      ----------
      start_frame : int, optional
          The start frame, if not provided it is set to 0
      end_frame : int, optional
          The end frame, if not provided it is set to the total number of samples
      
      Returns
      -------
      BaseRecording
          A new recording object with only samples between start_frame and end_frame
  Method: get_binary_description(self)
    Docstring:
      When `rec.is_binary_compatible()` is True
      this returns a dictionary describing the binary format.
  Method: get_channel_locations(self, channel_ids: 'list | np.ndarray | tuple | None' = None, axes: "'xy' | 'yz' | 'xz' | 'xyz'" = 'xy') -> 'np.ndarray'
    Docstring:
      Get the physical locations of specified channels.
      
      Parameters
      ----------
      channel_ids : array-like, optional
          The IDs of the channels for which to retrieve locations. If None, retrieves locations
          for all available channels. Default is None.
      axes : "xy" | "yz" | "xz" | "xyz", default: "xy"
          The spatial axes to return, specified as a string (e.g., "xy", "xyz"). Default is "xy".
      
      Returns
      -------
      np.ndarray
          A 2D or 3D array of shape (n_channels, n_dimensions) containing the locations of the channels.
          The number of dimensions depends on the `axes` argument (e.g., 2 for "xy", 3 for "xyz").
  Method: get_duration(self, segment_index=None) -> 'float'
    Docstring:
      Returns the duration in seconds.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index to retrieve the duration for.
          For multi-segment objects, it is required.
          With a single-segment recording, the duration of the single segment is returned.
      
      Returns
      -------
      float
          The duration in seconds
  Method: get_end_time(self, segment_index=None) -> 'float'
    Docstring:
      Get the stop time of the recording segment.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index (required for multi-segment)
      
      Returns
      -------
      float
          The stop time in seconds
  Method: get_memory_size(self, segment_index=None) -> 'int'
    Docstring:
      Returns the memory size of segment_index in bytes.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The index of the segment for which the memory size should be calculated.
          For multi-segment objects, it is required.
          With a single-segment recording, the memory size of the single segment is returned.
      
      Returns
      -------
      int
          The memory size of the specified segment in bytes.
  Method: get_num_frames(self, segment_index: 'int | None' = None) -> 'int'
    Docstring:
      Returns the number of samples for a segment.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index to retrieve the number of samples for.
          For multi-segment objects, it is required.
          With a single-segment recording, the number of samples in the segment is returned.
      
      Returns
      -------
      int
          The number of samples
  Method: get_num_samples(self, segment_index: 'int | None' = None) -> 'int'
    Docstring:
      Returns the number of samples for a segment.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index to retrieve the number of samples for.
          For multi-segment objects, it is required.
          With a single-segment recording, the number of samples in the segment is returned.
      
      Returns
      -------
      int
          The number of samples
  Method: get_num_segments(self) -> 'int'
    Docstring:
      Returns the number of segments.
      
      Returns
      -------
      int
          Number of segments in the recording
  Method: get_start_time(self, segment_index=None) -> 'float'
    Docstring:
      Get the start time of the recording segment.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index (required for multi-segment)
      
      Returns
      -------
      float
          The start time in seconds
  Method: get_time_info(self, segment_index=None) -> 'dict'
    Docstring:
      Retrieves the timing attributes for a given segment index. As with
      other recording methods, the segment index is only needed in the case
      of multi-segment recordings.
      
      Returns
      -------
      dict
          A dictionary containing the following key-value pairs:
      
          - "sampling_frequency" : The sampling frequency of the RecordingSegment.
          - "t_start" : The start time of the RecordingSegment.
          - "time_vector" : The time vector of the RecordingSegment.
      
      Notes
      -----
      The keys are always present, but the values may be None.
  Method: get_times(self, segment_index=None) -> 'np.ndarray'
    Docstring:
      Get time vector for a recording segment.
      
      If the segment has a time_vector, then it is returned. Otherwise
      a time_vector is constructed on the fly with sampling frequency.
      If t_start is defined and the time vector is constructed on the fly,
      the first time will be t_start. Otherwise it will start from 0.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index (required for multi-segment)
      
      Returns
      -------
      np.array
          The 1d times array
  Method: get_total_duration(self) -> 'float'
    Docstring:
      Returns the total duration in seconds
      
      Returns
      -------
      float
          The duration in seconds
  Method: get_total_memory_size(self) -> 'int'
    Docstring:
      Returns the sum in bytes of all the memory sizes of the segments.
      
      Returns
      -------
      int
          The total memory size in bytes for all segments.
  Method: get_total_samples(self) -> 'int'
    Docstring:
      Returns the sum of the number of samples in each segment.
      
      Returns
      -------
      int
          The total number of samples
  Method: get_traces(self, segment_index: 'int | None' = None, start_frame: 'int | None' = None, end_frame: 'int | None' = None, channel_ids: 'list | np.array | tuple | None' = None, order: "'C' | 'F' | None" = None, return_scaled: 'bool' = False, cast_unsigned: 'bool' = False) -> 'np.ndarray'
    Docstring:
      Returns traces from recording.
      
      Parameters
      ----------
      segment_index : int | None, default: None
          The segment index to get traces from. If recording is multi-segment, it is required
      start_frame : int | None, default: None
          The start frame. If None, 0 is used
      end_frame : int | None, default: None
          The end frame. If None, the number of samples in the segment is used
      channel_ids : list | np.array | tuple | None, default: None
          The channel ids. If None, all channels are used
      order : "C" | "F" | None, default: None
          The order of the traces ("C" | "F"). If None, traces are returned as they are
      return_scaled : bool, default: False
          If True and the recording has scaling (gain_to_uV and offset_to_uV properties),
          traces are scaled to uV
      cast_unsigned : bool, default: False
          If True and the traces are unsigned, they are cast to integer and centered
          (an offset of (2**nbits) is subtracted)
      
      Returns
      -------
      np.array
          The traces (num_samples, num_channels)
      
      Raises
      ------
      ValueError
          If return_scaled is True, but recording does not have scaled traces
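
  Example: a sketch using the generate_recording demo helper (assumed available):

    import spikeinterface.core as si

    recording = si.generate_recording(num_channels=4, durations=[10.0])
    traces = recording.get_traces(segment_index=0, start_frame=0, end_frame=1000)
    print(traces.shape)  # (1000, 4): (num_samples, num_channels)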
  Method: has_scaled_traces(self) -> 'bool'
    Docstring:
      Checks if the recording has scaled traces
      
      Returns
      -------
      bool
          True if the recording has scaled traces, False otherwise
  Method: has_time_vector(self, segment_index=None)
    Docstring:
      Check if the segment of the recording has a time vector.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index (required for multi-segment)
      
      Returns
      -------
      bool
          True if the recording has time vectors, False otherwise
  Method: is_binary_compatible(self) -> 'bool'
    Docstring:
      Checks if the recording is "binary" compatible.
      To be used before calling `rec.get_binary_description()`
      
      Returns
      -------
      bool
          True if the underlying recording is binary
  Method: rename_channels(self, new_channel_ids: 'list | np.array | tuple') -> "'BaseRecording'"
    Docstring:
      Returns a new recording object with renamed channel ids.
      
      Note that this method does not modify the current recording and instead returns a new recording object.
      
      Parameters
      ----------
      new_channel_ids : list or np.array or tuple
          The new channel ids. They are mapped positionally to the old channel ids.
  Method: reset_times(self)
    Docstring:
      Reset time information in-memory for all segments that have a time vector.
      If the timestamps come from a file, the files won't be modified; only the in-memory
      attributes of the recording object are deleted. Also, `t_start` is set to None and the
      segment's sampling frequency is set to the recording's sampling frequency.
  Method: sample_index_to_time(self, sample_ind, segment_index=None)
    Docstring:
      Transform sample index into time in seconds
  Method: select_channels(self, channel_ids: 'list | np.array | tuple') -> "'BaseRecording'"
    Docstring:
      Returns a new recording object with a subset of channels.
      
      Note that this method does not modify the current recording and instead returns a new recording object.
      
      Parameters
      ----------
      channel_ids : list or np.array or tuple
          The channel ids to select.
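
  Example: select_channels and rename_channels both return new objects and leave the original
  untouched (a sketch, continuing with a `recording` object; the `channel_ids` property is
  assumed here):

    sub = recording.select_channels(recording.channel_ids[:2])  # keep the first two channels
    renamed = sub.rename_channels(["a", "b"])                   # positional mapping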
  Method: set_times(self, times, segment_index=None, with_warning=True)
    Docstring:
      Set times for a recording segment.
      
      Parameters
      ----------
      times : 1d np.array
          The time vector
      segment_index : int or None, default: None
          The segment index (required for multi-segment)
      with_warning : bool, default: True
          If True, a warning is printed
  Method: shift_times(self, shift: 'int | float', segment_index: 'int | None' = None) -> 'None'
    Docstring:
      Shift all times by a scalar value.
      
      Parameters
      ----------
      shift : int | float
          The shift to apply. If positive, times will be increased by `shift`.
          e.g. shifting by 1 will be like the recording started 1 second later.
          If negative, the start time will be decreased i.e. as if the recording
          started earlier.
      
      segment_index : int | None
          The segment on which to shift the times.
          If `None`, all segments will be shifted.
  Method: time_slice(self, start_time: 'float | None', end_time: 'float | None') -> 'BaseRecording'
    Docstring:
      Returns a new recording object, restricted to the time interval [start_time, end_time].
      
      Parameters
      ----------
      start_time : float, optional
          The start time in seconds. If not provided it is set to 0.
      end_time : float, optional
          The end time in seconds. If not provided it is set to the total duration.
      
      Returns
      -------
      BaseRecording
          A new recording object with only samples between start_time and end_time
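
  Example: frame-based vs time-based slicing of the same first second (a sketch, assuming a
  single-segment `recording` and the `sampling_frequency` property):

    fs = recording.sampling_frequency
    by_frames = recording.frame_slice(start_frame=0, end_frame=int(fs))
    by_time = recording.time_slice(start_time=0.0, end_time=1.0)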
  Method: time_to_sample_index(self, time_s, segment_index=None)
    Docstring:
      None

Class: BaseRecordingSegment
  Docstring:
    Abstract class representing a multichannel timeseries, or block of raw ephys traces
  __init__(self, sampling_frequency=None, t_start=None, time_vector=None)
  Method: get_end_time(self) -> 'float'
    Docstring:
      None
  Method: get_num_samples(self) -> 'int'
    Docstring:
      Returns the number of samples in this signal segment
      
      Returns
      -------
      int
          Number of samples in the signal segment
  Method: get_start_time(self) -> 'float'
    Docstring:
      None
  Method: get_times(self) -> 'np.ndarray'
    Docstring:
      None
  Method: get_times_kwargs(self) -> 'dict'
    Docstring:
      Retrieves the timing attributes characterizing a RecordingSegment
      
      Returns
      -------
      dict
          A dictionary containing the following key-value pairs:
      
          - "sampling_frequency" : The sampling frequency of the RecordingSegment.
          - "t_start" : The start time of the RecordingSegment.
          - "time_vector" : The time vector of the RecordingSegment.
      
      Notes
      -----
      The keys are always present, but the values may be None.
  Method: get_traces(self, start_frame: 'int | None' = None, end_frame: 'int | None' = None, channel_indices: 'list | np.array | tuple | None' = None) -> 'np.ndarray'
    Docstring:
      Return the raw traces, optionally for a subset of samples and/or channels
      
      Parameters
      ----------
      start_frame : int | None, default: None
          start sample index, or zero if None
      end_frame : int | None, default: None
          end_sample, or number of samples if None
      channel_indices : list | np.array | tuple | None, default: None
          Indices of channels to return, or all channels if None
      
      Returns
      -------
      traces : np.ndarray
          Array of traces, num_samples x num_channels
  Method: sample_index_to_time(self, sample_ind)
    Docstring:
      Transform sample index into time in seconds
  Method: time_to_sample_index(self, time_s)
    Docstring:
      Transform time in seconds into sample index

Class: BaseRecordingSnippets
  Docstring:
    Mixin that handles all probe and channel operations
  __init__(self, sampling_frequency: 'float', channel_ids: 'list[str, int]', dtype: 'np.dtype')
  Method: channel_slice(self, channel_ids, renamed_channel_ids=None)
    Docstring:
      Returns a new object with sliced channels.
      
      Parameters
      ----------
      channel_ids : np.array or list
          The list of channels to keep
      renamed_channel_ids : np.array or list, default: None
          A list of renamed channels
      
      Returns
      -------
      BaseRecordingSnippets
          The object with sliced channels
  Method: clear_channel_groups(self, channel_ids=None)
    Docstring:
      None
  Method: clear_channel_locations(self, channel_ids=None)
    Docstring:
      None
  Method: create_dummy_probe_from_locations(self, locations, shape='circle', shape_params={'radius': 1}, axes='xy')
    Docstring:
      Creates a "dummy" probe based on locations.
      
      Parameters
      ----------
      locations : np.array
          Array with channel locations (num_channels, ndim) [ndim can be 2 or 3]
      shape : str, default: "circle"
          Electrode shapes
      shape_params : dict, default: {"radius": 1}
          Shape parameters
      axes : str, default: "xy"
          If ndim is 3, indicates the axes that define the plane of the electrodes
      
      Returns
      -------
      probe : Probe
          The created probe
  Method: frame_slice(self, start_frame, end_frame)
    Docstring:
      Returns a new object with sliced frames.
      
      Parameters
      ----------
      start_frame : int
          The start frame
      end_frame : int
          The end frame
      
      Returns
      -------
      BaseRecordingSnippets
          The object with sliced frames
  Method: get_channel_gains(self, channel_ids=None)
    Docstring:
      None
  Method: get_channel_groups(self, channel_ids=None)
    Docstring:
      None
  Method: get_channel_ids(self)
    Docstring:
      None
  Method: get_channel_locations(self, channel_ids=None, axes: 'str' = 'xy') -> 'np.ndarray'
    Docstring:
      None
  Method: get_channel_offsets(self, channel_ids=None)
    Docstring:
      None
  Method: get_channel_property(self, channel_id, key)
    Docstring:
      None
  Method: get_dtype(self)
    Docstring:
      None
  Method: get_num_channels(self)
    Docstring:
      None
  Method: get_probe(self)
    Docstring:
      None
  Method: get_probegroup(self)
    Docstring:
      None
  Method: get_probes(self)
    Docstring:
      None
  Method: get_sampling_frequency(self)
    Docstring:
      None
  Method: has_3d_locations(self) -> 'bool'
    Docstring:
      None
  Method: has_channel_location(self) -> 'bool'
    Docstring:
      None
  Method: has_probe(self) -> 'bool'
    Docstring:
      None
  Method: has_scaleable_traces(self) -> 'bool'
    Docstring:
      None
  Method: has_scaled(self)
    Docstring:
      None
  Method: is_filtered(self)
    Docstring:
      None
  Method: planarize(self, axes: 'str' = 'xy')
    Docstring:
      Returns a Recording with a 2D probe from one with a 3D probe
      
      Parameters
      ----------
      axes : "xy" | "yz" |"xz", default: "xy"
          The axes to keep
      
      Returns
      -------
      BaseRecording
          The recording with 2D positions
  Method: remove_channels(self, remove_channel_ids)
    Docstring:
      Returns a new object with removed channels.

      Parameters
      ----------
      remove_channel_ids : np.array or list
          The list of channels to remove
      
      Returns
      -------
      BaseRecordingSnippets
          The object with removed channels
  Method: select_channels(self, channel_ids)
    Docstring:
      Returns a new object with sliced channels.
      
      Parameters
      ----------
      channel_ids : np.array or list
          The list of channels to keep
      
      Returns
      -------
      BaseRecordingSnippets
          The object with sliced channels
  Method: select_segments(self, segment_indices)
    Docstring:
      Return a new object with the segments specified by "segment_indices".
      
      Parameters
      ----------
      segment_indices : list of int
          List of segment indices to keep in the returned recording
      
      Returns
      -------
      BaseRecordingSnippets
          The object with the selected segments
  Method: set_channel_gains(self, gains, channel_ids=None)
    Docstring:
      None
  Method: set_channel_groups(self, groups, channel_ids=None)
    Docstring:
      None
  Method: set_channel_locations(self, locations, channel_ids=None)
    Docstring:
      None
  Method: set_channel_offsets(self, offsets, channel_ids=None)
    Docstring:
      None
  Method: set_dummy_probe_from_locations(self, locations, shape='circle', shape_params={'radius': 1}, axes='xy')
    Docstring:
      Sets a "dummy" probe based on locations.
      
      Parameters
      ----------
      locations : np.array
          Array with channel locations (num_channels, ndim) [ndim can be 2 or 3]
      shape : str, default: "circle"
          Electrode shapes
      shape_params : dict, default: {"radius": 1}
          Shape parameters
      axes : "xy" | "yz" | "xz", default: "xy"
          If ndim is 3, indicates the axes that define the plane of the electrodes
  Method: set_probe(self, probe, group_mode='by_probe', in_place=False)
    Docstring:
      Attach a Probe object to a recording.

      Parameters
      ----------
      probe : Probe
          The probe to be attached to the recording
      group_mode : "by_probe" | "by_shank", default: "by_probe"
          Adds a grouping property to the recording based on the probes ("by_probe")
          or shanks ("by_shank")
      in_place : bool, default: False
          Useful internally when an extractor does self.set_probegroup(probe)
      
      Returns
      -------
      sub_recording: BaseRecording
          A view of the recording (ChannelSlice or clone or itself)
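
  Example: building and attaching a probe with probeinterface (generate_linear_probe and
  set_device_channel_indices are probeinterface helpers; their use here is a sketch):

    from probeinterface import generate_linear_probe

    num_channels = recording.get_num_channels()
    probe = generate_linear_probe(num_elec=num_channels)
    probe.set_device_channel_indices(range(num_channels))  # wiring is required before attaching
    rec_probed = recording.set_probe(probe)  # returns a view; original unchanged unless in_place=True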
  Method: set_probegroup(self, probegroup, group_mode='by_probe', in_place=False)
    Docstring:
      None
  Method: set_probes(self, probe_or_probegroup, group_mode='by_probe', in_place=False)
    Docstring:
      None
  Method: split_by(self, property='group', outputs='dict')
    Docstring:
      Splits object based on a certain property (e.g. "group")
      
      Parameters
      ----------
      property : str, default: "group"
          The property to use to split the object, default: "group"
      outputs : "dict" | "list", default: "dict"
          Whether to return a dict or a list
      
      Returns
      -------
      dict or list
          A dict or list with grouped objects based on property
      
      Raises
      ------
      ValueError
          Raised when property is not present
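
  Example: splitting a 4-channel recording by the "group" property (a sketch; groups are set
  first with set_channel_groups):

    recording.set_channel_groups(groups=[0, 0, 1, 1])
    grouped = recording.split_by(property="group", outputs="dict")
    # grouped maps each group value to a channel-sliced object, e.g. {0: ..., 1: ...}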

Class: BaseSnippets
  Docstring:
    Abstract class representing several multichannel snippets.
  __init__(self, sampling_frequency: 'float', nbefore: 'Union[int, None]', snippet_len: 'int', channel_ids: 'list', dtype)
  Method: add_snippets_segment(self, snippets_segment)
    Docstring:
      None
  Method: get_frames(self, indices=None, segment_index: 'Union[int, None]' = None)
    Docstring:
      None
  Method: get_num_segments(self)
    Docstring:
      None
  Method: get_num_snippets(self, segment_index=None)
    Docstring:
      None
  Method: get_snippets(self, indices=None, segment_index: 'Union[int, None]' = None, channel_ids: 'Union[list, None]' = None, return_scaled=False)
    Docstring:
      None
  Method: get_snippets_from_frames(self, segment_index: 'Union[int, None]' = None, start_frame: 'Union[int, None]' = None, end_frame: 'Union[int, None]' = None, channel_ids: 'Union[list, None]' = None, return_scaled=False)
    Docstring:
      None
  Method: get_times(self)
    Docstring:
      None
  Method: get_total_snippets(self)
    Docstring:
      None
  Method: has_scaled_snippets(self)
    Docstring:
      None
  Method: is_aligned(self)
    Docstring:
      None
  Method: select_channels(self, channel_ids: 'list | np.array | tuple') -> "'BaseSnippets'"
    Docstring:
      Returns a new object with sliced channels.
      
      Parameters
      ----------
      channel_ids : np.array or list
          The list of channels to keep
      
      Returns
      -------
      BaseSnippets
          The object with sliced channels

Class: BaseSnippetsSegment
  Docstring:
    Abstract class representing multichannel snippets
  __init__(self)
  Method: frames_to_indices(self, start_frame: 'Union[int, None]' = None, end_frame: 'Union[int, None]' = None)
    Docstring:
      Return the slice of snippets
      
      Parameters
      ----------
      start_frame : Union[int, None], default: None
          start sample index, or zero if None
      end_frame : Union[int, None], default: None
          end_sample, or number of samples if None
      
      Returns
      -------
      snippets : slice
          slice of selected snippets
  Method: get_frames(self, indices)
    Docstring:
      Returns the frames of the snippets in this segment

      Returns
      -------
      np.ndarray
          The frame indices of the snippets
  Method: get_num_snippets(self)
    Docstring:
      Returns the number of snippets in this segment
      
      Returns
      -------
      int
          Number of snippets in the segment
  Method: get_snippets(self, indices, channel_indices: 'Union[list, None]' = None) -> 'np.ndarray'
    Docstring:
      Return the snippets, optionally for a subset of samples and/or channels
      
      Parameters
      ----------
      indices : list[int]
          Indices of the snippets to return
      channel_indices : Union[list, None], default: None
          Indices of channels to return, or all channels if None
      
      Returns
      -------
      snippets : np.ndarray
          Array of snippets, num_snippets x num_samples x num_channels

Class: BaseSorting
  Docstring:
    Abstract class representing several segments, each with several units and their spiketrains.
  __init__(self, sampling_frequency: 'float', unit_ids: 'list')
  Method: add_sorting_segment(self, sorting_segment)
    Docstring:
      None
  Method: count_num_spikes_per_unit(self, outputs='dict')
    Docstring:
      For each unit, get the number of spikes across segments.
      
      Parameters
      ----------
      outputs : "dict" | "array", default: "dict"
          Control the type of the returned object: a dict (keys are unit_ids) or a numpy array.
      
      Returns
      -------
      dict or numpy.array
          Dict : Dictionary with unit_ids as key and number of spikes as values
          Numpy array : array of size len(unit_ids) in the same order as unit_ids.
  Method: count_total_num_spikes(self) -> 'int'
    Docstring:
      Get total number of spikes in the sorting.
      
      This is the sum of all spikes in all segments across all units.
      
      Returns
      -------
      total_num_spikes : int
          The total number of spikes
  Method: frame_slice(self, start_frame, end_frame, check_spike_frames=True)
    Docstring:
      None
  Method: get_empty_unit_ids(self) -> 'np.ndarray'
    Docstring:
      Return the unit IDs that have zero spikes across all segments.
      
      This method returns the complement of `get_non_empty_unit_ids` with respect
      to all unit IDs in the sorting.
      
      Returns
      -------
      np.ndarray
          Array of unit IDs (same dtype as self.unit_ids) for which no spikes exist.
  Method: get_non_empty_unit_ids(self) -> 'np.ndarray'
    Docstring:
      Return the unit IDs that have at least one spike across all segments.
      
      This method computes the number of spikes for each unit using
      `count_num_spikes_per_unit` and filters out units with zero spikes.
      
      Returns
      -------
      np.ndarray
          Array of unit IDs (same dtype as self.unit_ids) for which at least one spike exists.
  Method: get_num_samples(self, segment_index=None) -> 'int'
    Docstring:
      Returns the number of samples of the associated recording for a segment.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index to retrieve the number of samples for.
          For multi-segment objects, it is required
      
      Returns
      -------
      int
          The number of samples
  Method: get_num_segments(self) -> 'int'
    Docstring:
      None
  Method: get_num_units(self) -> 'int'
    Docstring:
      None
  Method: get_sampling_frequency(self) -> 'float'
    Docstring:
      None
  Method: get_times(self, segment_index=None)
    Docstring:
      Get time vector for a registered recording segment.
      
      If a recording is registered:
          * if the segment has a time_vector, then it is returned
          * if not, a time_vector is constructed on the fly with sampling frequency
      
      If there is no registered recording it returns None
  Method: get_total_duration(self) -> 'float'
    Docstring:
      Returns the total duration in s of the associated recording.
      
      Returns
      -------
      float
          The duration in seconds
  Method: get_total_samples(self) -> 'int'
    Docstring:
      Returns the total number of samples of the associated recording.
      
      Returns
      -------
      int
          The total number of samples
  Method: get_unit_ids(self) -> 'list'
    Docstring:
      None
  Method: get_unit_property(self, unit_id, key)
    Docstring:
      None
  Method: get_unit_spike_train(self, unit_id: 'str | int', segment_index: 'Union[int, None]' = None, start_frame: 'Union[int, None]' = None, end_frame: 'Union[int, None]' = None, return_times: 'bool' = False, use_cache: 'bool' = True)
    Docstring:
      None
  Method: has_recording(self) -> 'bool'
    Docstring:
      None
  Method: has_time_vector(self, segment_index=None) -> 'bool'
    Docstring:
      Check if the segment of the registered recording has a time vector.
  Method: precompute_spike_trains(self, from_spike_vector=None)
    Docstring:
      Pre-computes and caches all spike trains for this sorting
      
      Parameters
      ----------
      from_spike_vector : None | bool, default: None
          If None, then it is automatic depending on whether the spike vector is cached.
          If True, will compute it from the spike vector.
          If False, will call `get_unit_spike_train` for each segment for each unit.
  Method: register_recording(self, recording, check_spike_frames=True)
    Docstring:
      Register a recording to the sorting. If the sorting and recording both contain
      time information, the recording’s time information will be used.
      
      Parameters
      ----------
      recording : BaseRecording
          Recording with the same number of segments as current sorting.
          Assigned to self._recording.
      check_spike_frames : bool, default: True
          If True, assert for each segment that all spikes are within the recording's range.
  Method: remove_empty_units(self)
    Docstring:
      Returns a new sorting object which contains only units with at least one spike.
      For multi-segment sortings, a unit is considered empty if it has no spikes in any of the segments.
      
      Returns
      -------
      BaseSorting
          Sorting object with non-empty units
  Method: remove_units(self, remove_unit_ids) -> 'BaseSorting'
    Docstring:
      Returns a new sorting object with the specified units removed.
      
      Parameters
      ----------
      remove_unit_ids :  numpy.array or list
          List of unit ids to remove
      
      Returns
      -------
      BaseSorting
          Sorting without the removed units
  Method: rename_units(self, new_unit_ids: 'np.ndarray | list') -> 'BaseSorting'
    Docstring:
      Returns a new sorting object with renamed units.

      Parameters
      ----------
      new_unit_ids : numpy.array or list
          List of new names for unit ids.
          They should map positionally to the existing unit ids.
      
      Returns
      -------
      BaseSorting
          Sorting object with renamed units
  Method: select_units(self, unit_ids, renamed_unit_ids=None) -> 'BaseSorting'
    Docstring:
      Returns a new sorting object which contains only a selected subset of units.

      Parameters
      ----------
      unit_ids : numpy.array or list
          List of unit ids to keep
      renamed_unit_ids : numpy.array or list, default: None
          If given, the kept unit ids are renamed
      
      Returns
      -------
      BaseSorting
          Sorting object with selected units
  Method: set_sorting_info(self, recording_dict, params_dict, log_dict)
    Docstring:
      None
  Method: time_slice(self, start_time: 'float | None', end_time: 'float | None') -> 'BaseSorting'
    Docstring:
      Returns a new sorting object, restricted to the time interval [start_time, end_time].
      
      Parameters
      ----------
      start_time : float | None, default: None
          The start time in seconds. If not provided it is set to 0.
      end_time : float | None, default: None
          The end time in seconds. If not provided it is set to the total duration.
      
      Returns
      -------
      BaseSorting
          A new sorting object with only samples between start_time and end_time
  Method: time_to_sample_index(self, time, segment_index=0)
    Docstring:
      Transform time in seconds into sample index
  Method: to_multiprocessing(self, n_jobs)
    Docstring:
      When necessary, turn the sorting object into:
      * NumpySorting when n_jobs=1
      * SharedMemorySorting when n_jobs>1

      If the sorting is already a NumpySorting, SharedMemorySorting or NumpyFolderSorting,
      then the sorting itself is returned with no transformation.
      
      Parameters
      ----------
      n_jobs : int
          The number of jobs.

      Returns
      -------
      sharable_sorting:
          A sorting that can be used for multiprocessing.
  Method: to_numpy_sorting(self, propagate_cache=True)
    Docstring:
      Turn any sorting into a NumpySorting.
      Useful to have it in memory with a unique vector representation.

      Parameters
      ----------
      propagate_cache : bool
          Propagate the cache of individual spike trains.
  Method: to_shared_memory_sorting(self)
    Docstring:
      Turn any sorting into a SharedMemorySorting.
      Useful to have it in memory with a unique vector representation, sharable across processes.
  Method: to_spike_vector(self, concatenated=True, extremum_channel_inds=None, use_cache=True) -> 'np.ndarray | list[np.ndarray]'
    Docstring:
      Construct a unique structured numpy vector concatenating all spikes
      with several fields: sample_index, unit_index, segment_index.

      Parameters
      ----------
      concatenated : bool, default: True
          With concatenated=True the output is one numpy "spike vector" with spikes from all segments.
          With concatenated=False the output is a list "spike vector" by segment.
      extremum_channel_inds : None or dict, default: None
          If a dictionary of unit_id to channel_ind is given, then an extra field "channel_index" is added.
          This can be convenient for computing spike positions after sorting.
          This dict can be computed with `get_template_extremum_channel(we, outputs="index")`
      use_cache : bool, default: True
          When True the spikes vector is cached as an attribute of the object (`_cached_spike_vector`).
          This caching only occurs when extremum_channel_inds=None.
      
      Returns
      -------
      spikes : np.array
          Structured numpy array ("sample_index", "unit_index", "segment_index") with all spikes
          Or ("sample_index", "unit_index", "segment_index", "channel_index") if extremum_channel_inds
          is given
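
  Example: a sketch using the generate_sorting demo helper (assumed available):

    import spikeinterface.core as si

    sorting = si.generate_sorting(num_units=3, durations=[10.0])
    spikes = sorting.to_spike_vector()
    print(spikes.dtype.names)          # ("sample_index", "unit_index", "segment_index")
    print(spikes["sample_index"][:5])  # sample indices of the first five spikes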

Class: BaseSortingSegment
  Docstring:
    Abstract class representing several units and relative spiketrain inside a segment.
  __init__(self, t_start=None)
  Method: get_unit_spike_train(self, unit_id, start_frame: 'Optional[int]' = None, end_frame: 'Optional[int]' = None) -> 'np.ndarray'
    Docstring:
      Get the spike train for a unit.
      
      Parameters
      ----------
      unit_id
          The unit id to retrieve the spike train for
      start_frame : int, default: None
          Start frame, or beginning of the segment if None
      end_frame : int, default: None
          End frame, or end of the segment if None
      
      Returns
      -------
      np.ndarray

Class: BinaryFolderRecording
  Docstring:
    BinaryFolderRecording is an internal format used in spikeinterface.
    It is a BinaryRecordingExtractor + metadata contained in a folder.
    
    It is created with the function: `recording.save(format="binary", folder="/myfolder")`
    
    Parameters
    ----------
    folder_path : str or Path
    
    Returns
    -------
    recording : BinaryFolderRecording
        The recording
  __init__(self, folder_path)
  Method: get_binary_description(self)
    Docstring:
      When `rec.is_binary_compatible()` is True
      this returns a dictionary describing the binary format.
  Method: is_binary_compatible(self) -> 'bool'
    Docstring:
      Checks if the recording is "binary" compatible.
      To be used before calling `rec.get_binary_description()`
      
      Returns
      -------
      bool
          True if the underlying recording is binary

Class: BinaryRecordingExtractor
  Docstring:
    RecordingExtractor for a binary format
    
    Parameters
    ----------
    file_paths : str or Path or list
        Path to the binary file
    sampling_frequency : float
        The sampling frequency
    num_channels : int
        Number of channels
    dtype : str or dtype
        The dtype of the binary file
    time_axis : int, default: 0
        The axis of the time dimension
    t_starts : None or list of float, default: None
        Times in seconds of the first sample for each segment
    channel_ids : list, default: None
        A list of channel ids
    file_offset : int, default: 0
        Number of bytes in the file to offset by during memmap instantiation.
    gain_to_uV : float or array-like, default: None
        The gain to apply to the traces
    offset_to_uV : float or array-like, default: None
        The offset to apply to the traces
    is_filtered : bool or None, default: None
        If True, the recording is assumed to be filtered. If None, is_filtered is not set.
    
    Notes
    -----
    When both num_channels and num_chan are provided, `num_channels` is used and `num_chan` is ignored.
    
    Returns
    -------
    recording : BinaryRecordingExtractor
        The recording Extractor
  __init__(self, file_paths, sampling_frequency, dtype, num_channels: 'int', t_starts=None, channel_ids=None, time_axis=0, file_offset=0, gain_to_uV=None, offset_to_uV=None, is_filtered=None)
  Method: get_binary_description(self)
    Docstring:
      When `rec.is_binary_compatible()` is True
      this returns a dictionary describing the binary format.
  Method: is_binary_compatible(self) -> 'bool'
    Docstring:
      Checks if the recording is "binary" compatible.
      To be used before calling `rec.get_binary_description()`
      
      Returns
      -------
      bool
          True if the underlying recording is binary
  Method: write_recording(recording, file_paths, dtype=None, **job_kwargs)
    Docstring:
      Save the traces of a recording extractor in binary .dat format.
      
      Parameters
      ----------
      recording : RecordingExtractor
          The recording extractor object to be saved in .dat format
      file_paths : str
          The path to the file.
      dtype : dtype, default: None
          Type of the saved data
      **job_kwargs : keyword arguments for parallel processing:
          * chunk_duration or chunk_size or chunk_memory or total_memory
              - chunk_size : int
                  Number of samples per chunk
              - chunk_memory : str
                  Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
              - total_memory : str
                  Total memory usage (e.g. "500M", "2G")
              - chunk_duration : str or float or None
                  Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
          * n_jobs : int | float
              Number of jobs to use. With -1 the number of jobs is the same as number of cores.
              Using a float between 0 and 1 will use that fraction of the total cores.
          * progress_bar : bool
              If True, a progress bar is printed
          * mp_context : "fork" | "spawn" | None, default: None
              Context for multiprocessing. It can be None, "fork" or "spawn".
              Note that "fork" is only safely available on LINUX systems

Class: ChannelSliceRecording
  Docstring:
    Class to slice a Recording object based on channel_ids.
    
    Not intended to be used directly; use methods of `BaseRecording` such as `recording.select_channels` instead.
  __init__(self, parent_recording, channel_ids=None, renamed_channel_ids=None)

Class: ChannelSliceSnippets
  Docstring:
    Class to slice a Snippets object based on channel_ids.
    
    Do not use this class directly but use `snippets.channel_slice(...)`
  __init__(self, parent_snippets, channel_ids=None, renamed_channel_ids=None)

Class: ChannelSparsity
  Docstring:
    Handle channel sparsity for a set of units. That is, for every unit,
    it indicates which channels are used to represent the waveform and the rest
    of the non-represented channels are assumed to be zero.
    
    Internally, sparsity is stored as a boolean mask.
    
    The ChannelSparsity object can also provide other sparsity representations:
    
        * ChannelSparsity.unit_id_to_channel_ids : unit_id to channel_ids
        * ChannelSparsity.unit_id_to_channel_indices : unit_id to channel_inds
    
    By default it is constructed with a boolean array:
    
    >>> sparsity = ChannelSparsity(mask, unit_ids, channel_ids)
    
    But can be also constructed from a dictionary:
    
    >>> sparsity = ChannelSparsity.from_unit_id_to_channel_ids(unit_id_to_channel_ids, unit_ids, channel_ids)
    
    Parameters
    ----------
    mask : np.array of bool
        The sparsity mask (num_units, num_channels)
    unit_ids : list or array
        Unit ids vector or list
    channel_ids : list or array
        Channel ids vector or list
    
    Examples
    --------
    
    The class can also be used to construct/estimate the sparsity from a SortingAnalyzer or a Templates
    with several methods:
    
    Using the N best channels (largest template amplitude):
    
    >>> sparsity = ChannelSparsity.from_best_channels(sorting_analyzer, num_channels, peak_sign="neg")
    
    Using a neighborhood by radius:
    
    >>> sparsity = ChannelSparsity.from_radius(sorting_analyzer, radius_um, peak_sign="neg")
    
    Using a SNR threshold:
    >>> sparsity = ChannelSparsity.from_snr(sorting_analyzer, threshold, peak_sign="neg")
    
    Using a template energy threshold:
    >>> sparsity = ChannelSparsity.from_energy(sorting_analyzer, threshold)
    
    Using a recording/sorting property (e.g. "group"):
    
    >>> sparsity = ChannelSparsity.from_property(sorting_analyzer, by_property="group")
  __init__(self, mask, unit_ids, channel_ids)
  Method: are_waveforms_dense(self, waveforms: 'np.ndarray') -> 'bool'
    Docstring:
      None
  Method: are_waveforms_sparse(self, waveforms: 'np.ndarray', unit_id: 'str | int') -> 'bool'
    Docstring:
      None
  Method: densify_waveforms(self, waveforms: 'np.ndarray', unit_id: 'str | int') -> 'np.ndarray'
    Docstring:
      Densify sparse waveforms that were sparsified according to a unit's channel sparsity.

      Given a unit_id and its sparsified waveform, this method places the waveform back
      into its original form within a dense array.
      
      Parameters
      ----------
      waveforms : np.array
          The sparsified waveforms array of shape (num_waveforms, num_samples, num_active_channels) or a single
          sparse waveform (template) with shape (num_samples, num_active_channels).
      unit_id : str
          The unit_id that was used to sparsify the waveform.
      
      Returns
      -------
      densified_waveforms : np.array
          The densified waveforms array of shape (num_waveforms, num_samples, num_channels) or a single dense
          waveform (template) with shape (num_samples, num_channels).
  Method: sparsify_templates(self, templates_array: 'np.ndarray') -> 'np.ndarray'
    Docstring:
      None
  Method: sparsify_waveforms(self, waveforms: 'np.ndarray', unit_id: 'str | int') -> 'np.ndarray'
    Docstring:
      Sparsify the waveforms according to the sparsity of the corresponding unit_id.

      Given a unit_id, this method selects only the active channels for
      that unit and removes the rest.
      
      Parameters
      ----------
      waveforms : np.array
          Dense waveforms with shape (num_waveforms, num_samples, num_channels) or a
          single dense waveform (template) with shape (num_samples, num_channels).
      unit_id : str
          The unit_id for which to sparsify the waveform.
      
      Returns
      -------
      sparsified_waveforms : np.array
          Sparse waveforms with shape (num_waveforms, num_samples, num_active_channels)
          or a single sparsified waveform (template) with shape (num_samples, num_active_channels).
  Method: to_dict(self)
    Docstring:
      Return a serializable dict.
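
  Example: constructing sparsity from a boolean mask, then reading it back as a dict:

    import numpy as np
    from spikeinterface.core import ChannelSparsity

    mask = np.zeros((2, 4), dtype=bool)  # (num_units, num_channels)
    mask[0, :2] = True                   # unit "u0" uses channels "c0", "c1"
    mask[1, 2:] = True                   # unit "u1" uses channels "c2", "c3"
    sparsity = ChannelSparsity(mask, unit_ids=["u0", "u1"],
                               channel_ids=["c0", "c1", "c2", "c3"])
    print(sparsity.unit_id_to_channel_ids["u0"])  # the channel ids active for "u0"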

Class: ChannelsAggregationRecording
  Docstring:
    Class that handles aggregating channels from different recordings, e.g. from different channel groups.
    
    Do not use this class directly but use `si.aggregate_channels(...)`
  __init__(self, recording_list_or_dict=None, renamed_channel_ids=None, recording_list=None)

Class: ChunkRecordingExecutor
  Docstring:
    Core class for parallel processing to run a "function" over chunks on a recording.
    
    It supports running a function:
        * in loop with chunk processing (low RAM usage)
        * at once if chunk_size is None (high RAM usage)
        * in parallel with ProcessPoolExecutor (higher speed)
    
    The initializer ("init_func") allows to set a global context to avoid heavy serialization
    (for examples, see implementation in `core.waveform_tools`).
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording to be processed
    func : function
        Function that runs on each chunk
    init_func : function
        Initializer function to set the global context (accessible by "func")
    init_args : tuple
        Arguments for init_func
    verbose : bool
        If True, output is verbose
    job_name : str, default: ""
        Job name
    progress_bar : bool, default: False
        If True, a progress bar is printed to monitor the progress of the process
    handle_returns : bool, default: False
        If True, the function can return values
    gather_func : None or callable, default: None
        Optional function that is called in the main thread and retrieves the results of each worker.
        This function can be used instead of `handle_returns` to implement custom storage on-the-fly.
    pool_engine : "process" | "thread", default: "thread"
        If n_jobs>1 then use ProcessPoolExecutor or ThreadPoolExecutor
    n_jobs : int, default: 1
        Number of jobs to be used. Use -1 to use as many jobs as number of cores
    total_memory : str, default: None
        Total memory (RAM) to use (e.g. "1G", "500M")
    chunk_memory : str, default: None
        Memory per chunk (RAM) to use (e.g. "1G", "500M")
    chunk_size : int or None, default: None
        Size of each chunk in number of samples. If "total_memory" or "chunk_memory" are used, it is ignored.
    chunk_duration : str or float or None
        Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
    mp_context : "fork" | "spawn" | None, default: None
        "fork" or "spawn". If None, the context is taken by the recording.get_preferred_mp_context().
        "fork" is only safely available on LINUX systems.
    max_threads_per_worker : int or None, default: None
        Limit the number of threads per process using the threadpoolctl module.
        This is used only when n_jobs>1.
        If None, no limits.
    need_worker_index : bool, default: False
        If True, each worker will also have a "worker_index" injected in the local worker dict.
    
    Returns
    -------
    res : list
        If "handle_returns" is True, the results for each chunk process
  __init__(self, recording, func, init_func, init_args, verbose=False, progress_bar=False, handle_returns=False, gather_func=None, pool_engine='thread', n_jobs=1, total_memory=None, chunk_size=None, chunk_memory=None, chunk_duration=None, mp_context=None, job_name='', max_threads_per_worker=1, need_worker_index=False)
  Method: run(self, recording_slices=None)
    Docstring:
      Runs the defined jobs.
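
  Example: a sketch of the func/init_func pattern, under the assumption that `func` receives
  (segment_index, start_frame, end_frame, worker_ctx) and `init_func` builds the per-worker
  context dict from `init_args` (generate_recording is a demo helper, assumed available):

    import numpy as np
    import spikeinterface.core as si
    from spikeinterface.core import ChunkRecordingExecutor

    recording = si.generate_recording(num_channels=4, durations=[10.0])

    def init_func(recording):
        # runs once per worker: build the shared context without per-chunk serialization
        return dict(recording=recording)

    def func(segment_index, start_frame, end_frame, worker_ctx):
        traces = worker_ctx["recording"].get_traces(segment_index=segment_index,
                                                    start_frame=start_frame,
                                                    end_frame=end_frame)
        return np.abs(traces).max()

    executor = ChunkRecordingExecutor(recording, func, init_func, init_args=(recording,),
                                      handle_returns=True, n_jobs=1, chunk_duration="1s")
    chunk_maxima = executor.run()  # one value per chunk because handle_returns=True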

Class: ComputeNoiseLevels
  Docstring:
    Computes the noise level associated with each recording channel.
    
    This extension wraps `get_noise_levels(recording)` to make the noise levels persistent
    on disk (folder or zarr) as an `AnalyzerExtension`.
    The noise levels do not depend on the unit list, only on the recording, but this is a convenient way to
    retrieve the noise levels directly from the SortingAnalyzer.
    
    Note that the noise levels can be scaled or not, depending on the `return_scaled` parameter
    of the `SortingAnalyzer`.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object
    **kwargs : dict
        Additional parameters for the `spikeinterface.get_noise_levels()` function
    
    Returns
    -------
    noise_levels : np.array
        The noise level vector
  __init__(self, sorting_analyzer)

Class: ComputeRandomSpikes
  Docstring:
    AnalyzerExtension that selects some random spikes.
    This allows for subsampling spikes for further calculations and is important
    for managing memory usage and computation speed in the analyzer.
    
    This will be used by the `waveforms`/`templates` extensions.
    
    This internally uses `random_spikes_selection()` parameters.
    
    Parameters
    ----------
    method : "uniform" | "all", default: "uniform"
        The method to select the spikes
    max_spikes_per_unit : int, default: 500
        The maximum number of spikes per unit, ignored if method="all"
    margin_size : int, default: None
        A margin on each border of segments to avoid border spikes, ignored if method="all"
    seed : int or None, default: None
        A seed for the random generator, ignored if method="all"
    
    Returns
    -------
    random_spike_indices: np.array
        The indices of the selected spikes
  __init__(self, sorting_analyzer)
  Method: get_random_spikes(self)
    Docstring:
      None
  Method: get_selected_indices_in_spike_train(self, unit_id, segment_index)
    Docstring:
      None
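  Example:
    A short sketch, assuming an existing `sorting_analyzer`; the seed value is illustrative:

      ext = sorting_analyzer.compute("random_spikes", method="uniform",
                                     max_spikes_per_unit=500, seed=2205)
      # indices of the selected spikes in the flattened spike vector
      random_spike_indices = ext.get_data()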

Class: ComputeTemplates
  Docstring:
    AnalyzerExtension that computes templates (average, std, median, percentile, ...)
    
    This depends on the "waveforms" extension (`SortingAnalyzer.compute("waveforms")`)
    
    When the "waveforms" extension is already computed, then the recording is not needed anymore for this extension.
    
    Note: by default only the average and std are computed. Other operators (median, percentile) can be computed on demand
    after `SortingAnalyzer.compute("templates")`, and the data dict is updated accordingly.
    
    Parameters
    ----------
    operators: list[str] | list[(str, float)] (for percentile)
        The operators to compute. Can be "average", "std", "median", "percentile"
        If percentile is used, then the second element of the tuple is the percentile to compute.
    
    Returns
    -------
    templates: np.ndarray
        The computed templates with shape (num_units, num_samples, num_channels)
  __init__(self, sorting_analyzer)
  Method: get_templates(self, unit_ids=None, operator='average', percentile=None, save=True, outputs='numpy')
    Docstring:
      Return templates (average, std, median or percentiles) for multiple units.
      
      If not computed yet then this is computed on demand and optionally saved.
      
      Parameters
      ----------
      unit_ids : list or None
          Unit ids to retrieve templates for
      operator : "average" | "median" | "std" | "percentile", default: "average"
          The operator to compute the templates
      percentile : float, default: None
          Percentile to use for operator="percentile"
      save : bool, default: True
          In case the operator is not computed yet, it can be saved to folder or zarr
      outputs : "numpy" | "Templates", default: "numpy"
          Whether to return a numpy array or a Templates object
      
      Returns
      -------
      templates : np.array | Templates
          The returned templates (num_units, num_samples, num_channels)
  Method: get_unit_template(self, unit_id, operator='average')
    Docstring:
      Return template for a single unit.
      
      Parameters
      ----------
      unit_id: str | int
          Unit id to retrieve the template for
      operator: str, default: "average"
           The operator to compute the templates
      
      Returns
      -------
      template: np.array
          The returned template (num_samples, num_channels)
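  Example:
    A sketch of the on-demand operator mechanism described above, assuming an existing
    `sorting_analyzer` ("waveforms" itself depends on "random_spikes"):

      sorting_analyzer.compute(["random_spikes", "waveforms", "templates"])
      ext = sorting_analyzer.get_extension("templates")
      # "average" and "std" are computed by default; "median" is computed on demand here
      median_templates = ext.get_templates(operator="median")
      one_template = ext.get_unit_template(unit_id=sorting_analyzer.unit_ids[0])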

Class: ComputeWaveforms
  Docstring:
    AnalyzerExtension that extracts some waveforms for each unit.
    
    The sparsity is controlled by the SortingAnalyzer sparsity.
    
    Parameters
    ----------
    ms_before : float, default: 1.0
        The number of ms to extract before the spike events
    ms_after : float, default: 2.0
        The number of ms to extract after the spike events
    dtype : None | dtype, default: None
        The dtype of the waveforms. If None, the dtype of the recording is used.
    
    Returns
    -------
    waveforms : np.ndarray
        Array with computed waveforms with shape (num_random_spikes, num_samples, num_channels)
  __init__(self, sorting_analyzer)
  Method: get_waveforms_one_unit(self, unit_id, force_dense: bool = False)
    Docstring:
      Returns the waveforms of a unit id.
      
      Parameters
      ----------
      unit_id : int or str
          The unit id to return waveforms for
      force_dense : bool, default: False
          If True, dense waveforms (on all channels) are returned even if the SortingAnalyzer is sparse.
      
      Returns
      -------
      waveforms: np.array
          The waveforms (num_waveforms, num_samples, num_channels).
          In case sparsity is used, only the waveforms on sparse channels are returned.
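  Example:
    A short sketch, assuming "random_spikes" has already been computed on `sorting_analyzer`:

      ext = sorting_analyzer.compute("waveforms", ms_before=1.0, ms_after=2.0)
      unit_id = sorting_analyzer.unit_ids[0]
      wfs = ext.get_waveforms_one_unit(unit_id)
      # with a sparse analyzer, force_dense=True returns waveforms on all channels
      dense_wfs = ext.get_waveforms_one_unit(unit_id, force_dense=True)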

Class: ConcatenateSegmentRecording
  Docstring:
    Return a recording that "concatenates" all segments from all parent recordings
    into one recording with a single segment. The operation is lazy.
    
    For instance, given one recording with 2 segments and one recording with
    3 segments, this class will give one recording with one large segment
    made by concatenating the 5 segments.
    
    Time information is lost upon concatenation. By default `ignore_times` is True.
    If it is False, you get an error unless:
    
      * all segments DO NOT have times, AND
      * all segments have t_start=None
    
    Parameters
    ----------
    recording_list : list of BaseRecording
        A list of recordings
    ignore_times: bool, default: True
        If True, time information (t_start, time_vector) is ignored when concatenating recordings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across recordings
  __init__(self, recording_list, ignore_times=True, sampling_frequency_max_diff=0)
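  Example:
    A minimal sketch, assuming the companion concatenate_recordings() function wraps this class:

      from spikeinterface.core import generate_recording, concatenate_recordings

      rec_a = generate_recording(durations=[10.0, 10.0])  # 2 segments
      rec_b = generate_recording(durations=[10.0] * 3)    # 3 segments
      rec_concat = concatenate_recordings([rec_a, rec_b])
      assert rec_concat.get_num_segments() == 1           # one 50 s segment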

Class: ConcatenateSegmentSorting
  Docstring:
    Return a sorting that "concatenates" all segments from all sorting
    into one sorting with a single segment. This operation is lazy.
    
    For instance, given one sorting with 2 segments and one sorting with
    3 segments, this class will give one sorting with one large segment
    made by concatenating the 5 segments. The returned spike times (originating
    from each segment) are relative to the start of the concatenated segment.
    
    Time information is lost upon concatenation. By default `ignore_times` is True.
    If it is False, you get an error unless:
    
      * all segments DO NOT have times, AND
      * all segments have t_start=None
    
    Parameters
    ----------
    sorting_list : list of BaseSorting
        A list of sortings. If `total_samples_list` is not provided, all
        sortings should have an assigned recording.  Otherwise, all sortings
        should be monosegments.
    total_samples_list : list[int] or None, default: None
        If the sortings have no assigned recording, the total number of samples
        of each of the concatenated (monosegment) sortings is pulled from this
        list.
    ignore_times : bool, default: True
        If True, time information (t_start, time_vector) is ignored
        when concatenating the sortings' assigned recordings.
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across sortings
  __init__(self, sorting_list, total_samples_list=None, ignore_times=True, sampling_frequency_max_diff=0)
  Method: get_num_samples(self, segment_index=None)
    Docstring:
      Overrides the BaseSorting method, which requires a recording.
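  Example:
    A minimal sketch, assuming the companion concatenate_sortings() function wraps this class;
    the sortings here have no assigned recording, so total_samples_list is required:

      from spikeinterface.core import generate_sorting, concatenate_sortings

      sortings = [generate_sorting(durations=[10.0]) for _ in range(2)]
      # 10 s at the default 30 kHz -> 300000 samples per (mono-segment) sorting
      sorting_concat = concatenate_sortings(sortings, total_samples_list=[300000, 300000])
      assert sorting_concat.get_num_segments() == 1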

Class: FrameSliceRecording
  Docstring:
    Class to get a lazy frame slice.
    Works only with mono-segment recordings.
    
    Do not use this class directly but use `recording.frame_slice(...)`
    
    Parameters
    ----------
    parent_recording: BaseRecording
    start_frame: None or int, default: None
        Earliest included frame in the parent recording.
        Times are re-referenced to start_frame in the
        sliced object. Set to 0 by default.
    end_frame: None or int, default: None
        Latest frame in the parent recording. As for usual
        python slicing, the end frame is excluded.
        Set to the recording's total number of samples by
        default
  __init__(self, parent_recording, start_frame=None, end_frame=None)
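  Example:
    A minimal sketch using the recommended entry point, assuming a mono-segment `recording`:

      fs = int(recording.sampling_frequency)
      # keep only the first 10 seconds
      sub_recording = recording.frame_slice(start_frame=0, end_frame=10 * fs)
      # the analogous sorting.frame_slice(...) returns a FrameSliceSorting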

Class: FrameSliceSorting
  Docstring:
    Class to get a lazy frame slice.
    Works only with mono-segment sortings.
    
    Do not use this class directly but use `sorting.frame_slice(...)`
    
    When a recording is registered for the parent sorting,
    a corresponding sliced recording is registered to the sliced sorting.
    
    Note that the returned sliced sorting may be empty.
    
    Parameters
    ----------
    parent_sorting: BaseSorting
    start_frame: None or int, default: None
        Earliest included frame in the parent sorting(/recording).
        Spike times(/traces) are re-referenced to start_frame in the
        sliced objects. Set to 0 if None.
    end_frame: None or int, default: None
        Latest frame in the parent sorting(/recording). As for usual
        python slicing, the end frame is excluded (such that the max
        spike frame in the sliced sorting is `end_frame - start_frame - 1`)
        If None, the end_frame is either:
            - The total number of samples, if a recording is assigned
            - The maximum spike frame + 1, if no recording is assigned
  __init__(self, parent_sorting, start_frame=None, end_frame=None, check_spike_frames=True)

Class: InjectTemplatesRecording
  Docstring:
    Class for creating a recording based on spike timings and templates.
    Can be just the templates or can add to an already existing recording.
    
    Parameters
    ----------
    sorting : BaseSorting
        Sorting object containing all the units and their spike train.
    templates : np.ndarray[n_units, n_samples, n_channels] | np.ndarray[n_units, n_samples, n_oversampling]
        Array containing the templates to inject for all the units.
        Shape can be:
    
            * (num_units, num_samples, num_channels): standard case
            * (num_units, num_samples, num_channels, upsample_factor): case with oversample template to introduce sampling jitter.
    nbefore : list[int] | int | None, default: None
        The number of samples before the peak of the template to align the spike.
        If None, alignment defaults to the highest peak of each template.
    amplitude_factor : list[float] | float | None, default: None
        The amplitude of each spike for each unit.
        Can be None (no scaling).
        Can be a scalar, in which case all spikes have the same factor (certainly useless).
        Can be a vector with the same shape as the sorting's spike_vector.
    parent_recording : BaseRecording | None, default: None
        The recording over which to add the templates.
        If None, traces default to all zeros.
    num_samples : list[int] | int | None, default: None
        The number of samples in the recording per segment.
        You can use int for mono-segment objects.
    upsample_vector : np.array | None, default: None
        When templates is 4d, sampling jitter can be simulated.
        The optional upsample_vector is the jitter index, with one value per spike, in the range [0, templates.shape[3]).
    check_borders : bool, default: False
        Checks if the border of the templates are zero.
    
    Returns
    -------
    injected_recording: InjectTemplatesRecording
        The recording with the templates injected.
  __init__(self, sorting: 'BaseSorting', templates: 'np.ndarray', nbefore: 'list[int] | int | None' = None, amplitude_factor: 'list[float] | float | None' = None, parent_recording: 'BaseRecording | None' = None, num_samples: 'list[int] | int | None' = None, upsample_vector: 'np.array | None' = None, check_borders: 'bool' = False) -> 'None'
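  Example:
    A toy sketch with random dense templates (all values are illustrative only):

      import numpy as np
      from spikeinterface.core import generate_sorting, InjectTemplatesRecording

      sorting = generate_sorting(num_units=4, durations=[10.0], sampling_frequency=30000.0)
      # dense templates: (num_units, num_samples, num_channels)
      templates = np.random.randn(4, 90, 16).astype("float32")
      recording = InjectTemplatesRecording(
          sorting, templates, nbefore=30,
          num_samples=[300000],  # one entry per segment, since parent_recording is None
      )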

Class: Motion
  Docstring:
    Motion of the tissue relative to the probe.
    
    Parameters
    ----------
    displacement : numpy array 2d or list of
        Motion estimate in um.
        The list length is the number of segments.
        For each segment:
    
            * shape (temporal bins, spatial bins)
            * motion.shape[0] = temporal_bins.shape[0]
            * motion.shape[1] = 1 (rigid) or spatial_bins.shape[1] (non rigid)
    temporal_bins_s : numpy.array 1d or list of
        Temporal bins (bin centers)
    spatial_bins_um : numpy.array 1d
        Window centers.
        spatial_bins_um.shape[0] == displacement.shape[1]
        If rigid then spatial_bins_um.shape[0] == 1
    direction : str, default: 'y'
        Direction of the motion.
    interpolation_method : str
        How to determine the displacement between bin centers? See the docs
        for scipy.interpolate.RegularGridInterpolator for options.
  __init__(self, displacement, temporal_bins_s, spatial_bins_um, direction='y', interpolation_method='linear')
  Method: check_properties(self)
    Docstring:
      None
  Method: copy(self)
    Docstring:
      None
  Method: from_dict(d)
    Docstring:
      None
  Method: get_boundaries(self)
    Docstring:
      None
  Method: get_displacement_at_time_and_depth(self, times_s, locations_um, segment_index=None, grid=False)
    Docstring:
      Evaluate the motion estimate at times and positions
      
      Evaluate the motion estimate, returning the (linearly interpolated) estimated displacement
      at the given times and locations.
      
      Parameters
      ----------
      times_s: np.array
      locations_um: np.array
          Either this is a one-dimensional array (a vector of positions along self.dimension), or
          else a 2d array with the 2 or 3 spatial dimensions indexed along axis=1.
      segment_index: int, default: None
          The index of the segment to evaluate. If None, and there is only one segment, then that segment is used.
      grid : bool, default: False
          If grid=False, the default, then times_s and locations_um should have the same one-dimensional
          shape, and the returned displacement[i] is the displacement at time times_s[i] and location
          locations_um[i].
          If grid=True, times_s and locations_um determine a grid of positions to evaluate the displacement.
          Then the returned displacement[i,j] is the displacement at depth locations_um[i] and time times_s[j].
      
      Returns
      -------
      displacement : np.array
          A displacement per input location, of shape times_s.shape if grid=False and (locations_um.size, times_s.size)
          if grid=True.
  Method: make_interpolators(self)
    Docstring:
      None
  Method: save(self, folder)
    Docstring:
      None
  Method: to_dict(self)
    Docstring:
      None
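  Example:
    A rigid-motion sketch (a single spatial bin); the import path is an assumption:

      import numpy as np
      from spikeinterface.core.motion import Motion  # assumed import path

      temporal_bins_s = np.linspace(0, 10, 100)
      displacement = np.sin(temporal_bins_s)[:, np.newaxis]  # (100, 1), in um
      spatial_bins_um = np.array([1000.0])
      motion = Motion(displacement, temporal_bins_s, spatial_bins_um, direction="y")
      # interpolated displacement at arbitrary times/depths
      disp = motion.get_displacement_at_time_and_depth(
          times_s=np.array([1.0, 2.5]), locations_um=np.array([500.0, 1500.0])
      )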

Class: NoiseGeneratorRecording
  Docstring:
    A lazy recording that generates white noise samples if and only if `get_traces` is called.
    
    This is done by tiling small noise chunks.
    
    Two strategies make it reproducible across different start/end frame calls:
      * "tile_pregenerated": pregenerate a small noise block and tile it depending on the start_frame/end_frame
      * "on_the_fly": generate small noise chunks on the fly and tile them; the seed also depends on the noise block index.
    
    
    Parameters
    ----------
    num_channels : int
        The number of channels.
    sampling_frequency : float
        The sampling frequency of the recording.
    durations : list[float]
        The durations of each segment in seconds. Note that the length of this list is the number of segments.
    noise_levels : float | np.array, default: 1.0
        Std of the white noise (if an array, one value per channel)
    cov_matrix : np.array | None, default: None
        The covariance matrix of the noise
    dtype : np.dtype | str | None, default: "float32"
        The dtype of the recording. Note that only np.float32 and np.float64 are supported.
    seed : int | None, default: None
        The seed for np.random.default_rng.
    strategy : "tile_pregenerated" | "on_the_fly", default: "tile_pregenerated"
        The strategy for generating noise chunks:
          * "tile_pregenerated": pregenerate a noise chunk of noise_block_size samples and repeat it;
                                 very fast, and consumes only one noise block of memory.
          * "on_the_fly": generate a new noise block on the fly by combining seed + noise block index;
                          no memory preallocation but a bit more computation (random generation)
    noise_block_size : int, default: 30000
        Size in samples of the noise block.
    
    Notes
    -----
    If modifying this function, ensure that only one call to malloc is made per call to get_traces to
    maintain the optimized memory profile.
  __init__(self, num_channels: 'int', sampling_frequency: 'float', durations: 'list[float]', noise_levels: 'float | np.array' = 1.0, cov_matrix: 'np.array | None' = None, dtype: 'np.dtype | str | None' = 'float32', seed: 'int | None' = None, strategy: "Literal['tile_pregenerated', 'on_the_fly']" = 'tile_pregenerated', noise_block_size: 'int' = 30000)
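  Example:
    A minimal sketch, assuming the class is importable from spikeinterface.core:

      from spikeinterface.core import NoiseGeneratorRecording

      recording = NoiseGeneratorRecording(
          num_channels=16,
          sampling_frequency=30000.0,
          durations=[10.0],          # one 10 s segment
          noise_levels=5.0,          # std of the white noise
          seed=2205,
          strategy="tile_pregenerated",
      )
      # noise is generated lazily, only here
      traces = recording.get_traces(start_frame=0, end_frame=30000)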

Class: NpyFolderSnippets
  Docstring:
    NpyFolderSnippets is an internal format used in spikeinterface.
    It is a NpySnippetsExtractor + metadata contained in a folder.
    
    It is created with the function: `snippets.save(format="npy", folder="/myfolder")`
    
    Parameters
    ----------
    folder_path : str or Path
        The path to the folder
    
    Returns
    -------
    snippets : NpyFolderSnippets
        The snippets
  __init__(self, folder_path)

Class: NpySnippetsExtractor
  Docstring:
    Dead simple and super light format based on the NPY numpy format.
    
    It is in fact an archive of several files in .npy format.
    All spikes are stored in a two-column manner: index + labels
  __init__(self, file_paths, sampling_frequency, channel_ids=None, nbefore=None, gain_to_uV=None, offset_to_uV=None)
  Method: write_snippets(snippets, file_paths, dtype=None)
    Docstring:
      Save snippet extractor in binary .npy format.
      
      Parameters
      ----------
      snippets: SnippetsExtractor
          The snippets extractor object to be saved in .npy format
      file_paths: str
          The paths to the files.
      dtype: None, str or dtype
          Typecode or data-type to which the snippets will be cast.

Class: NpzFolderSorting
  Docstring:
    NpzFolderSorting is the old internal format used in spikeinterface (<=0.98.0)
    
    This is a folder that contains:
    
      * "sorting_cached.npz" file in the NpzSortingExtractor format
      * "npz.json" which the json description of NpzSortingExtractor
      * a metadata folder for units properties.
    
    It is created with the function: `sorting.save(folder="/myfolder", format="npz_folder")`
    
    Parameters
    ----------
    folder_path : str or Path
    
    Returns
    -------
    sorting : NpzFolderSorting
        The sorting
  __init__(self, folder_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Class: NpzSortingExtractor
  Docstring:
    Dead simple and super light format based on the NPZ numpy format.
    https://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html#numpy.savez
    
    It is in fact an archive of several files in .npy format.
    All spikes are stored in a two-column manner: index + labels
  __init__(self, file_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Class: NumpyEvent
  Docstring:
    Abstract class representing events.
    
    
    Parameters
    ----------
    channel_ids : list or np.array
        The channel ids
    structured_dtype : dtype or dict
        The dtype of the events. If dict, each key is the channel_id and values must be
        the dtype of the channel (also structured). If dtype, each channel is assigned the
        same dtype.
        In case of structured dtypes, the "time" or "timestamp" field name must be present.
  __init__(self, channel_ids, structured_dtype)
  Method: from_dict(event_dict_list)
    Docstring:
      Constructs NumpyEvent from a dictionary
      
      Parameters
      ----------
      event_dict_list : list
          List of dictionaries with channel_ids as keys and event data as values.
          Each list element corresponds to an event segment.
          If values have a simple dtype, they are considered the timestamps.
          If values have a structured dtype, they have to contain a "times" or "timestamps"
          field.
      
      Returns
      -------
      NumpyEvent
          The Event object

Class: NumpyFolderSorting
  Docstring:
    NumpyFolderSorting is the new internal format used in spikeinterface (>=0.99.0) for caching sorting objects.
    
    It is a simple folder that contains:
      * a file "spike.npy" (numpy format) with all flatten spikes (using sorting.to_spike_vector())
      * a "numpysorting_info.json" containing sampling_frequency, unit_ids and num_segments
      * a metadata folder for units properties.
    
    It is created with the function: `sorting.save(folder="/myfolder", format="numpy_folder")`
  __init__(self, folder_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Class: NumpyRecording
  Docstring:
    In memory recording.
    Contrary to previous versions, this class does not handle npy files.
    
    Parameters
    ----------
    traces_list :  list of array or array (if mono segment)
        The traces to instantiate a mono or multisegment Recording
    sampling_frequency : float
        The sampling frequency in Hz
    t_starts : None or list of float
        Times in seconds of the first sample for each segment
    channel_ids : list
        An optional list of channel_ids. If None, linear channels are assumed
  __init__(self, traces_list, sampling_frequency, t_starts=None, channel_ids=None)
  Method: from_recording(source_recording, **job_kwargs)
    Docstring:
      None
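  Example:
    A minimal sketch wrapping an in-memory array (toy values):

      import numpy as np
      from spikeinterface.core import NumpyRecording

      traces = np.random.randn(300000, 16).astype("float32")  # (num_samples, num_channels)
      recording = NumpyRecording(traces, sampling_frequency=30000.0)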

Class: NumpySnippets
  Docstring:
    In memory snippets.
    Contrary to previous versions, this class does not handle npy files.
    
    Parameters
    ----------
    snippets_list :  list of array or array (if mono segment)
        The snippets to instantiate a mono or multisegment basesnippet
    spikesframes_list : list of array or array (if mono segment)
        Frame of each snippet
    sampling_frequency : float
        The sampling frequency in Hz
    
    channel_ids : list
        An optional list of channel_ids. If None, linear channels are assumed
  __init__(self, snippets_list, spikesframes_list, sampling_frequency, nbefore=None, channel_ids=None)

Class: NumpySorting
  Docstring:
    In memory sorting object.
    The internal representation is always done with a long "spike vector".
    
    
    But we have convenient class methods to instantiate from:
      * other sorting object: `NumpySorting.from_sorting()`
      * from samples+labels: `NumpySorting.from_samples_and_labels()`
      * from times+labels: `NumpySorting.from_times_and_labels()`
      * from dict of list: `NumpySorting.from_unit_dict()`
      * from neo: `NumpySorting.from_neo_spiketrain_list()`
    
    Parameters
    ----------
    spikes :  numpy.array
        A numpy vector, the one given by Sorting.to_spike_vector().
    sampling_frequency : float
        The sampling frequency in Hz
    unit_ids : list
        A list of unit_ids.
  __init__(self, spikes, sampling_frequency, unit_ids)
  Method: from_neo_spiketrain_list(neo_spiketrains, sampling_frequency, unit_ids=None) -> "'NumpySorting'"
    Docstring:
      Construct a NumpySorting with a neo spiketrain list.
      
      If this is a list of lists, it is multi-segment.
      
      Parameters
      ----------
  Method: from_peaks(peaks, sampling_frequency, unit_ids) -> "'NumpySorting'"
    Docstring:
      Construct a sorting from peaks returned by 'detect_peaks()' function.
      The unit ids correspond to the recording channel ids and spike trains are the
      detected spikes for each channel.
      
      Parameters
      ----------
      peaks : structured np.array
          Peaks array as returned by the 'detect_peaks()' function
      sampling_frequency : float
          the sampling frequency in Hz
      unit_ids : np.array
          The unit_ids vector which is generally the channel_ids but can be different.
      
      Returns
      -------
      sorting
          The NumpySorting object
  Method: from_samples_and_labels(samples_list, labels_list, sampling_frequency, unit_ids=None) -> "'NumpySorting'"
    Docstring:
      Construct NumpySorting extractor from:
        * an array of spike samples
        * an array of spike labels
      In case of multi-segment, these are lists of arrays.
      
      Parameters
      ----------
      samples_list : list of array (or array)
          An array of spike samples
      labels_list : list of array (or array)
          An array of spike labels corresponding to the given times
      unit_ids : list or None, default: None
          The explicit list of unit_ids that should be extracted from labels_list
          If None, then it will be np.unique(labels_list)
  Method: from_sorting(source_sorting: 'BaseSorting', with_metadata=False, copy_spike_vector=False) -> "'NumpySorting'"
    Docstring:
      Create a numpy sorting from another sorting extractor
  Method: from_times_and_labels(times_list, labels_list, sampling_frequency, unit_ids=None) -> "'NumpySorting'"
    Docstring:
      Construct NumpySorting extractor from:
        * an array of spike times (in s)
        * an array of spike labels
      In case of multi-segment, these are lists of arrays.
      
      Parameters
      ----------
      times_list : list of array (or array)
          An array of spike times (in s)
      labels_list : list of array (or array)
          An array of spike labels corresponding to the given times
      unit_ids : list or None, default: None
          The explicit list of unit_ids that should be extracted from labels_list
          If None, then it will be np.unique(labels_list)
  Method: from_times_labels(times_list, labels_list, sampling_frequency, unit_ids=None) -> "'NumpySorting'"
    Docstring:
      None
  Method: from_unit_dict(units_dict_list, sampling_frequency) -> "'NumpySorting'"
    Docstring:
      Construct NumpySorting from a list of dict.
      The list length is the segment count.
      Each dict has unit_ids as keys and spike times as values.
      
      Parameters
      ----------
      dict_list : list of dict
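  Example:
    Two of the convenience constructors listed above (toy values):

      import numpy as np
      from spikeinterface.core import NumpySorting

      samples = np.array([100, 150, 1000, 2000])
      labels = np.array(["a", "b", "a", "b"])
      sorting = NumpySorting.from_samples_and_labels(samples, labels, sampling_frequency=30000.0)
      # one dict per segment: unit_ids as keys, spike frames as values
      sorting2 = NumpySorting.from_unit_dict(
          [{"a": np.array([100, 1000]), "b": np.array([150, 2000])}],
          sampling_frequency=30000.0,
      )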

Class: SelectSegmentRecording
  Docstring:
    Return a new recording with a subset of segments from a multi-segment recording.
    
    Parameters
    ----------
    recording : BaseRecording
        The multi-segment recording
    segment_indices : int | list[int]
        The segment indices to select
  __init__(self, recording: 'BaseRecording', segment_indices: 'int | list[int]')

Class: SelectSegmentSorting
  Docstring:
    Return a new sorting with a subset of segments from a multi-segment sorting.
    
    Parameters
    ----------
    sorting : BaseSorting
        The multi-segment sorting
    segment_indices : int | list[int]
        The segment indices to select
  __init__(self, sorting: 'BaseSorting', segment_indices: 'int | list[int]')

Class: SharedMemoryRecording
  Docstring:
    In memory recording with shared memory buffers.
    
    Parameters
    ----------
    shm_names : list
        List of sharedmem names for each segment
    shape_list : list
        List of shape of sharedmem buffer for each segment
        The first dimension is the number of samples, the second is the number of channels.
        Note that the number of channels must be the same for all segments
    sampling_frequency : float
        The sampling frequency in Hz
    t_starts : None or list of float
        Times in seconds of the first sample for each segment
    channel_ids : list
        An optional list of channel_ids. If None, linear channels are assumed
    main_shm_owner : bool, default: True
        If True, the main instance will unlink the sharedmem buffer when deleted
  __init__(self, shm_names, shape_list, dtype, sampling_frequency, channel_ids=None, t_starts=None, main_shm_owner=True)
  Method: from_recording(source_recording, **job_kwargs)
    Docstring:
      None

Class: SharedMemorySorting
  Docstring:
    Abstract class representing several segments, several units, and the corresponding spike trains.
  __init__(self, shm_name, shape, sampling_frequency, unit_ids, dtype=[('sample_index', 'int64'), ('unit_index', 'int64'), ('segment_index', 'int64')], main_shm_owner=True)
  Method: from_sorting(source_sorting, with_metadata=False)
    Docstring:
      None

Class: SortingAnalyzer
  Docstring:
    Class to make a pair of Recording-Sorting which will be used for all postprocessing,
    visualization and quality metric computation.
    
    This internally maintains a list of computed AnalyzerExtensions (waveforms, pca, unit positions, spike positions, ...).
    
    This can live in memory and/or can be persisted to disk in 2 internal formats (folder/json/npz or zarr).
    A SortingAnalyzer can be transferred to another format using `save_as()`.
    
    This handles unit sparsity, which can be propagated to AnalyzerExtensions.
    
    This handles spike sampling, which can be propagated to AnalyzerExtensions: computations can work on only a subset of spikes.
    
    This internally saves a copy of the Sorting and extracts main recording attributes (without traces) so
    the SortingAnalyzer object can be reloaded even if references to the original sorting and/or to the original recording
    are lost.
    
    SortingAnalyzer() should never be used directly for creation: use create_sorting_analyzer(sorting, recording, ...)
    instead, or eventually SortingAnalyzer.create(...)
  __init__(self, sorting: 'BaseSorting', recording: 'BaseRecording | None' = None, rec_attributes: 'dict | None' = None, format: 'str | None' = None, sparsity: 'ChannelSparsity | None' = None, return_scaled: 'bool' = True, backend_options: 'dict | None' = None)
  Method: are_units_mergeable(self, merge_unit_groups: 'list[str | int]', merging_mode: 'str' = 'soft', sparsity_overlap: 'float' = 0.75, return_masks: 'bool' = False)
    Docstring:
      Check if soft merges can be performed given sparsity_overlap param.
      
      Parameters
      ----------
      merge_unit_groups : list/tuple of lists/tuples
          A list of lists for every merge group. Each element needs to have at least two elements
          (two units to merge).
      merging_mode : "soft" | "hard", default: "soft"
          How merges are performed. In the "soft" mode, merges will be approximated, with no smart merging
          of the extension data.
      sparsity_overlap : float, default: 0.75
          The percentage of overlap that units should share in order to accept merges.
      return_masks : bool, default: False
          If True, return the masks used for the merge.
      
      Returns
      -------
      mergeable : dict[bool]
          Dictionary of mergeable units. The keys are the merge unit groups (as tuples), and boolean
          values indicate if the merge is possible.
      masks : dict[np.array]
          Dictionary of masks used for the merge. The keys are the merge unit groups, and the values
          are the masks used for the merge.
  Method: channel_ids_to_indices(self, channel_ids) -> 'np.ndarray'
    Docstring:
      None
  Method: compute(self, input, save=True, extension_params=None, verbose=False, **kwargs) -> "'AnalyzerExtension | None'"
    Docstring:
      Compute one extension or several extensions.
      Internally calls compute_one_extension() or compute_several_extensions() depending on the input type.
      
      Parameters
      ----------
      input : str or dict or list
          The extensions to compute, which can be passed as:
          * a string: compute one extension. Additional parameters can be passed as key word arguments.
          * a dict: compute several extensions. The keys are the extension names and the values are dictionaries with the extension parameters.
          * a list: compute several extensions. The list contains the extension names. Additional parameters can be passed with the extension_params
          argument.
      save : bool, default: True
          If True the extension is saved to disk (only if sorting analyzer format is not "memory")
      extension_params : dict or None, default: None
          If input is a list, this parameter can be used to specify parameters for each extension.
          The extension_params keys must be included in the input list.
      **kwargs:
          All other kwargs are transmitted to extension.set_params() (if input is a string) or job_kwargs
      
      Returns
      -------
      extension : SortingAnalyzerExtension | None
          The extension instance if input is a string, None otherwise.
      
      Examples
      --------
      This function accepts the following possible signatures for flexibility:
      
      Compute one extension, with parameters:
      >>> analyzer.compute("waveforms", ms_before=1.5, ms_after=2.5)
      
      Compute two extensions with a list as input and with default parameters:
      >>> analyzer.compute(["random_spikes", "waveforms"])
      
      Compute two extensions with dict as input, one dict per extension
      >>> analyzer.compute({"random_spikes":{}, "waveforms":{"ms_before":1.5, "ms_after", "2.5"}})
      
      Compute two extensions with an input list specifying custom parameters for one
      (the other will use default parameters):
      >>> analyzer.compute(["random_spikes", "waveforms"],extension_params={"waveforms":{"ms_before":1.5, "ms_after": "2.5"}})
  Method: compute_one_extension(self, extension_name, save=True, verbose=False, **kwargs) -> "'AnalyzerExtension'"
    Docstring:
      Compute one extension.
      
      Important note: when an extension is recomputed, all extensions that depend on it
      will be automatically and silently deleted to keep the data coherent.
      
      Parameters
      ----------
      extension_name : str
          The name of the extension.
          For instance "waveforms", "templates", ...
      save : bool, default: True
          If the extension can be saved, then it is saved.
          If not, then the extension will only live in memory until the object is deleted.
          save=False is convenient to try some parameters without changing an already saved extension.
      
      **kwargs:
          All other kwargs are transmitted to extension.set_params() or job_kwargs
      
      Returns
      -------
      result_extension : AnalyzerExtension
          Return the extension instance
      
      Examples
      --------
      
      Note that the return is the extension instance:
      >>> extension = sorting_analyzer.compute("waveforms", **some_params)
      >>> extension = sorting_analyzer.compute_one_extension("waveforms", **some_params)
      >>> wfs = extension.data["waveforms"]
      >>> # Note: this can also be done in the old-style way, BUT the return is not the same: it directly returns the data
      >>> wfs = compute_waveforms(sorting_analyzer, **some_params)
  Method: compute_several_extensions(self, extensions, save=True, verbose=False, **job_kwargs)
    Docstring:
      Compute several extensions
      
      Important note: when an extension is recomputed, all extensions that depend on it
      will be automatically and silently deleted to keep the data coherent.
      
      
      Parameters
      ----------
      extensions : dict
          Keys are extension_names and values are params.
      save : bool, default: True
          If the extension can be saved, then it is saved.
          If not, then the extension will only live in memory until the object is deleted.
          save=False is convenient to try some parameters without changing an already saved extension.
      
      Returns
      -------
      No return
      
      Examples
      --------
      
      >>> sorting_analyzer.compute({"waveforms": {"ms_before": 1.2}, "templates" : {"operators": ["average", "std", ]} })
      >>> sorting_analyzer.compute_several_extensions({"waveforms": {"ms_before": 1.2}, "templates" : {"operators": ["average", "std"]}})
  Method: copy(self)
    Docstring:
      Create a copy of the SortingAnalyzer with format "memory".
  Method: delete_extension(self, extension_name) -> 'None'
    Docstring:
      Delete the extension from the dict and also in the persistent zarr or folder.
  Method: get_channel_locations(self) -> 'np.ndarray'
    Docstring:
      None
  Method: get_computable_extensions(self)
    Docstring:
      Get all extensions that can be computed by the analyzer.
  Method: get_default_extension_params(self, extension_name: 'str') -> 'dict'
    Docstring:
      Get the default params for an extension.
      
      Parameters
      ----------
      extension_name : str
          The extension name
      
      Returns
      -------
      default_params : dict
          The default parameters for the extension
  Method: get_dtype(self)
    Docstring:
      None
  Method: get_extension(self, extension_name: 'str')
    Docstring:
      Get an AnalyzerExtension.
      If not loaded yet, it is loaded automatically.
      
      Return None if the extension is not computed yet (this avoids calling has_extension() first and then getting it)
  Method: get_loaded_extension_names(self)
    Docstring:
      Return the loaded or already computed extension names.
  Method: get_num_channels(self) -> 'int'
    Docstring:
      None
  Method: get_num_samples(self, segment_index: 'Optional[int]' = None) -> 'int'
    Docstring:
      None
  Method: get_num_segments(self) -> 'int'
    Docstring:
      None
  Method: get_num_units(self) -> 'int'
    Docstring:
      None
  Method: get_probe(self)
    Docstring:
      None
  Method: get_probegroup(self)
    Docstring:
      None
  Method: get_recording_property(self, key) -> 'np.ndarray'
    Docstring:
      None
  Method: get_saved_extension_names(self)
    Docstring:
      Get extension names saved in folder or zarr that can be loaded.
      This does not load data; it only explores the directory.
  Method: get_sorting_property(self, key) -> 'np.ndarray'
    Docstring:
      None
  Method: get_sorting_provenance(self)
    Docstring:
      Get the original sorting if possible otherwise return None
  Method: get_total_duration(self) -> 'float'
    Docstring:
      None
  Method: get_total_samples(self) -> 'int'
    Docstring:
      None
  Method: has_extension(self, extension_name: 'str') -> 'bool'
    Docstring:
      Check if the extension exists in memory (dict) or in the folder or in zarr.
  Method: has_recording(self) -> 'bool'
    Docstring:
      None
  Method: has_temporary_recording(self) -> 'bool'
    Docstring:
      None
  Method: is_filtered(self) -> 'bool'
    Docstring:
      None
  Method: is_read_only(self) -> 'bool'
    Docstring:
      None
  Method: is_sparse(self) -> 'bool'
    Docstring:
      None
  Method: load_all_saved_extension(self)
    Docstring:
      Load all saved extensions in memory.
  Method: load_extension(self, extension_name: 'str')
    Docstring:
      Load an extension from a folder or zarr into the `SortingAnalyzer.extensions` dict.
      
      Parameters
      ----------
      extension_name : str
          The extension name.
      
      Returns
      -------
      ext_instance:
          The loaded instance of the extension
  Method: merge_units(self, merge_unit_groups, new_unit_ids=None, censor_ms=None, merging_mode='soft', sparsity_overlap=0.75, new_id_strategy='append', return_new_unit_ids=False, format='memory', folder=None, verbose=False, **job_kwargs) -> "'SortingAnalyzer'"
    Docstring:
      This method is equivalent to `save_as()` but with a list of merges to be applied.
      Merges units by creating a new SortingAnalyzer object with the appropriate merges
      
      Extensions are also updated to display the merged `unit_ids`.
      
      Parameters
      ----------
      merge_unit_groups : list/tuple of lists/tuples
          A list of lists for every merge group. Each element needs to have at least two elements (two units to merge),
          but it can also have more (merge multiple units at once).
      new_unit_ids : None | list, default: None
          A new unit_ids for merged units. If given, it needs to have the same length as `merge_unit_groups`. If None,
          merged units will have the first unit_id of every list of merges
      censor_ms : None | float, default: None
          When merging units, any spikes violating this refractory period will be discarded. If None, all spikes are kept
      merging_mode : ["soft", "hard"], default: "soft"
          How merges are performed. If `merging_mode` is "soft", merges will be approximated, with no reloading of the
          waveforms. This will lead to approximations. If `merging_mode` is "hard", recomputations are accurately performed,
          reloading waveforms if needed
      sparsity_overlap : float, default: 0.75
          The percentage of overlap that units should share in order to accept merges. If this criterion is not
          met, soft merging will not be possible and an error will be raised
      new_id_strategy : "append" | "take_first", default: "append"
          The strategy that should be used, if `new_unit_ids` is None, to create new unit_ids.
      
              * "append" : new_units_ids will be added at the end of max(sorting.unit_ids)
              * "take_first" : new_unit_ids will be the first unit_id of every list of merges
      return_new_unit_ids : bool, default: False
          Also return new_unit_ids, which are the ids of the new units.
      folder : Path | None, default: None
          The new folder where the analyzer with merged units is copied for `format` "binary_folder" or "zarr"
      format : "memory" | "binary_folder" | "zarr", default: "memory"
          The format of SortingAnalyzer
      verbose : bool, default: False
          Whether to display calculations (such as sparsity estimation)
      
      Returns
      -------
      analyzer :  SortingAnalyzer
          The newly created `SortingAnalyzer` with the merged units
  Method: remove_units(self, remove_unit_ids, format='memory', folder=None) -> "'SortingAnalyzer'"
    Docstring:
      This method is equivalent to `save_as()` but with removal of a subset of units.
      Filters units by creating a new sorting analyzer object in a new folder.
      
      Extensions are also updated to remove the unit ids.
      
      Parameters
      ----------
      remove_unit_ids : list or array
          The unit ids to remove in the new SortingAnalyzer object.
      format : "memory" | "binary_folder" | "zarr" , default: "memory"
          The format of the returned SortingAnalyzer.
      folder : Path or None, default: None
          The new folder where the analyzer without removed units is copied if `format`
          is "binary_folder" or "zarr"
      
      Returns
      -------
      analyzer :  SortingAnalyzer
          The newly created sorting_analyzer without the removed units
  Method: save_as(self, format='memory', folder=None, backend_options=None) -> "'SortingAnalyzer'"
    Docstring:
      Save SortingAnalyzer object into another format.
      Useful for converting from memory to zarr or from memory to binary.
      
      Note that the recording provenance or sorting provenance can be lost.
      
      Mainly propagates the copied sorting and recording properties.
      
      Parameters
      ----------
      folder : str | Path | None, default: None
          The output folder if `format` is "zarr" or "binary_folder"
      format : "memory" | "binary_folder" | "zarr", default: "memory"
          The new backend format to use
      backend_options : dict | None, default: None
          Keyword arguments for the backend specified by format. It can contain the:
      
              * storage_options: dict | None (fsspec storage options)
              * saving_options: dict | None (additional saving options for creating and saving datasets, e.g. compression/filters for zarr)
  Method: select_units(self, unit_ids, format='memory', folder=None) -> "'SortingAnalyzer'"
    Docstring:
      This method is equivalent to `save_as()` but with a subset of units.
      Filters units by creating a new sorting analyzer object in a new folder.
      
      Extensions are also updated to filter the selected unit ids.
      
      Parameters
      ----------
      unit_ids : list or array
          The unit ids to keep in the new SortingAnalyzer object
      format : "memory" | "binary_folder" | "zarr" , default: "memory"
          The format of the returned SortingAnalyzer.
      folder : Path | None, default: None
          The new folder where the analyzer with selected units is copied if `format` is
          "binary_folder" or "zarr"
      
      Returns
      -------
      analyzer :  SortingAnalyzer
          The newly created sorting_analyzer with the selected units
  Method: set_sorting_property(self, key, values: 'list | np.ndarray | tuple', ids: 'list | np.ndarray | tuple | None' = None, missing_value: 'Any' = None, save: 'bool' = True) -> 'None'
    Docstring:
      Set property vector for unit ids.
      
      If the SortingAnalyzer backend is in memory, the property will be only set in memory.
      If the SortingAnalyzer backend is `binary_folder` or `zarr`, the property will also
      be saved to the backend.
      
      Parameters
      ----------
      key : str
          The property name
      values : np.array
          Array of values for the property
      ids : list/np.array, default: None
          List of a subset of ids for which to set the values.
          If None, all ids are set or changed
      missing_value : Any, default: None
          In case the property is set on a subset of values ("ids" not None),
          This argument specifies how to fill missing values.
          The `missing_value` is required for types int and unsigned int.
      save : bool, default: True
          If True, the property is saved to the backend if possible.
  Method: set_temporary_recording(self, recording: 'BaseRecording', check_dtype: 'bool' = True)
    Docstring:
      Sets a temporary recording object. This function can be useful to temporarily set
      a "cached" recording object that is not saved in the SortingAnalyzer object to speed up
      computations. Upon reloading, the SortingAnalyzer object will try to reload the recording
      from the original location in a lazy way.
      
      
      Parameters
      ----------
      recording : BaseRecording
          The recording object to set as temporary recording.
      check_dtype : bool, default: True
          If True, check that the dtype of the temporary recording is the same as the original recording.
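  Example:
    A minimal end-to-end sketch using the recommended create_sorting_analyzer() entry point
    (the output folder is illustrative):

      from spikeinterface.core import generate_ground_truth_recording, create_sorting_analyzer

      recording, sorting = generate_ground_truth_recording(durations=[10.0], num_units=5)
      analyzer = create_sorting_analyzer(sorting, recording, format="memory")
      analyzer.compute(["random_spikes", "waveforms", "templates"])
      templates = analyzer.get_extension("templates").get_data()
      # persist everything to another backend
      analyzer_on_disk = analyzer.save_as(format="binary_folder", folder="/tmp/my_analyzer")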

Class: SpikeVectorSortingSegment
  Docstring:
    A sorting segment that stores spike times as a spike vector.
  __init__(self, spikes, segment_index, unit_ids)
  Method: get_unit_spike_train(self, unit_id, start_frame, end_frame)
    Docstring:
      Get the spike train for a unit.
      
      Parameters
      ----------
      unit_id
      start_frame : int, default: None
      end_frame : int, default: None
      
      Returns
      -------
      np.ndarray

Class: SplitSegmentSorting
  Docstring:
    Splits a sorting with a single segment into multiple segments
    based on the given list of recordings (must be in order)
    
    Parameters
    ----------
    parent_sorting : BaseSorting
        Sorting with a single segment (e.g. from sorting a concatenated recording)
    recording_or_recording_list : list of recordings, ConcatenateSegmentRecording, or None, default: None
        If list of recordings, uses the lengths of those recordings to split the sorting
        into smaller segments
        If ConcatenateSegmentRecording, uses the associated list of recordings to split
        the sorting into smaller segments
        If None, looks for the recording associated with the sorting
  __init__(self, parent_sorting: 'BaseSorting', recording_or_recording_list=None)

Class: Templates
  Docstring:
    A class to represent spike templates, which can be either dense or sparse.
    
    Parameters
    ----------
    templates_array : np.ndarray
        Array containing the templates data.
    sampling_frequency : float
        Sampling frequency of the templates.
    nbefore : int
        Number of samples before the spike peak.
    sparsity_mask : np.ndarray or None, default: None
        Boolean array indicating the sparsity pattern of the templates.
        If `None`, the templates are considered dense.
    channel_ids : np.ndarray, optional, default: None
        Array of channel IDs. If `None`, defaults to an array of increasing integers.
    unit_ids : np.ndarray, optional, default: None
        Array of unit IDs. If `None`, defaults to an array of increasing integers.
    probe : Probe, default: None
        A `probeinterface.Probe` object
    is_scaled : bool, optional, default: True
        If True, it means that the templates are in uV, otherwise they are in raw ADC values.
    check_for_consistent_sparsity : bool, optional, default: None
        When passing a sparsity_mask, this checks that the templates array is also sparse and that it matches the
        structure of the sparsity_mask. If False, this check is skipped.
    
    The following attributes are available after construction:
    
    Attributes
    ----------
    num_units : int
        Number of units in the templates. Automatically determined from `templates_array`.
    num_samples : int
        Number of samples per template. Automatically determined from `templates_array`.
    num_channels : int
        Number of channels in the templates. Automatically determined from `templates_array` or `sparsity_mask`.
    nafter : int
        Number of samples after the spike peak. Calculated as `num_samples - nbefore - 1`.
    ms_before : float
        Milliseconds before the spike peak. Calculated from `nbefore` and `sampling_frequency`.
    ms_after : float
        Milliseconds after the spike peak. Calculated from `nafter` and `sampling_frequency`.
    sparsity : ChannelSparsity, optional
        Object representing the sparsity pattern of the templates. Calculated from `sparsity_mask`.
        If `None`, the templates are considered dense.
  __init__(self, templates_array: 'np.ndarray', sampling_frequency: 'float', nbefore: 'int', is_scaled: 'bool' = True, sparsity_mask: 'np.ndarray' = None, channel_ids: 'np.ndarray' = None, unit_ids: 'np.ndarray' = None, probe: 'Probe' = None, check_for_consistent_sparsity: 'bool' = True) -> None
  Method: add_templates_to_zarr_group(self, zarr_group: "'zarr.Group'") -> 'None'
    Docstring:
      Adds a serialized version of the object to a given Zarr group.
      
      It is the inverse of the `from_zarr_group` method.
      
      Parameters
      ----------
      zarr_group : zarr.Group
          The Zarr group to which the template object will be serialized.
      
      Notes
      -----
      This method will create datasets within the Zarr group for `templates_array`,
      `channel_ids`, and `unit_ids`. It will also add `sampling_frequency` and `nbefore`
      as attributes to the group. If `sparsity_mask` and `probe` are not None, they will
      be included as a dataset and a subgroup, respectively.
      
      The `templates_array` dataset is saved with a chunk size that has a single unit per chunk
      to optimize read/write operations for individual units.
  Method: are_templates_sparse(self) -> 'bool'
    Docstring:
      None
  Method: from_zarr(folder_path: 'str | Path') -> "'Templates'"
    Docstring:
      Deserialize the Templates object from a Zarr file located at the given folder path.
      
      Parameters
      ----------
      folder_path : str | Path
          The path to the folder where the Zarr file is located.
      
      Returns
      -------
      Templates
          An instance of Templates initialized with data from the Zarr file.
  Method: get_channel_locations(self) -> 'np.ndarray'
    Docstring:
      None
  Method: get_dense_templates(self) -> 'np.ndarray'
    Docstring:
      None
  Method: get_one_template_dense(self, unit_index)
    Docstring:
      None
  Method: select_channels(self, channel_ids) -> 'Templates'
    Docstring:
      Return a new Templates object with only the selected channels.
      This operation can be useful to remove bad channels for hybrid recording
      generation.
      
      Parameters
      ----------
      channel_ids : list
          List of channel IDs to select.
  Method: select_units(self, unit_ids) -> 'Templates'
    Docstring:
      Return a new Templates object with only the selected units.
      
      Parameters
      ----------
      unit_ids : list
          List of unit IDs to select.
  Method: to_dict(self)
    Docstring:
      None
  Method: to_json(self)
    Docstring:
      None
  Method: to_sparse(self, sparsity)
    Docstring:
      None
  Method: to_zarr(self, folder_path: 'str | Path') -> 'None'
    Docstring:
      Saves the object's data to a Zarr file in the specified folder.
      
      Use the `add_templates_to_zarr_group` method to serialize the object to a Zarr group and then
      save the group to a Zarr file.
      
      Parameters
      ----------
      folder_path : str | Path
          The path to the folder where the Zarr data will be saved.
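  Example:
    A round-trip sketch with toy values (the zarr path is illustrative):

      import numpy as np
      from spikeinterface.core import Templates

      templates_array = np.random.randn(8, 90, 16).astype("float32")
      tmpl = Templates(templates_array, sampling_frequency=30000.0, nbefore=30)
      tmpl.to_zarr("/tmp/my_templates.zarr")
      tmpl_back = Templates.from_zarr("/tmp/my_templates.zarr")
      assert tmpl_back.num_units == 8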

Class: UnitsAggregationSorting
  Docstring:
    Aggregates units of multiple sortings into a single sorting object
    
    Parameters
    ----------
    sorting_list: list
        List of BaseSorting objects to aggregate
    renamed_unit_ids: array-like
        If given, unit ids are renamed as provided. If None, unit ids are sequential integers.
    
    Returns
    -------
    aggregate_sorting: UnitsAggregationSorting
        The aggregated sorting object
  __init__(self, sorting_list, renamed_unit_ids=None)

Class: UnitsSelectionSorting
  Docstring:
    Class that handles slicing of a Sorting object based on a list of unit_ids.
    
    Do not use this class directly but use `sorting.select_units(...)`
  __init__(self, parent_sorting, unit_ids=None, renamed_unit_ids=None)

Class: ZarrRecordingExtractor
  Docstring:
    RecordingExtractor for a zarr format
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the zarr root folder. This can be a local path or a remote path (s3:// or gcs://).
        If the path is a remote path, the storage_options can be provided to specify credentials.
        If the remote path is not accessible and backend_options is not provided,
        the function will try to load the object in anonymous mode (anon=True),
        which enables loading data from open buckets.
    storage_options : dict or None
        Storage options for zarr `store`. E.g., if "s3://" or "gcs://" they can provide authentication methods, etc.
    
    Returns
    -------
    recording : ZarrRecordingExtractor
        The recording Extractor
  __init__(self, folder_path: 'Path | str', storage_options: 'dict | None' = None)
  Method: write_recording(recording: 'BaseRecording', folder_path: 'str | Path', storage_options: 'dict | None' = None, **kwargs)
    Docstring:
      None
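  Example:
    A write-then-reload sketch using the static writer shown above, assuming an existing
    `recording` object (the path is illustrative):

      from spikeinterface.core import ZarrRecordingExtractor

      ZarrRecordingExtractor.write_recording(recording, "/tmp/my_recording.zarr")
      zarr_recording = ZarrRecordingExtractor("/tmp/my_recording.zarr")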

Class: ZarrSortingExtractor
  Docstring:
    SortingExtractor for a zarr format
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the zarr root file. This can be a local path or a remote path (s3:// or gcs://).
        If the path is a remote path, the storage_options can be provided to specify credentials.
        If the remote path is not accessible and backend_options is not provided,
        the function will try to load the object in anonymous mode (anon=True),
        which enables loading data from open buckets.
    storage_options : dict or None
        Storage options for zarr `store`. E.g., if "s3://" or "gcs://" they can provide authentication methods, etc.
    zarr_group : str or None, default: None
        Optional zarr group path to load the sorting from. This can be used when the sorting is not stored at the root, but in a sub-group.
    
    Returns
    -------
    sorting : ZarrSortingExtractor
        The sorting Extractor
  __init__(self, folder_path: 'Path | str', storage_options: 'dict | None' = None, zarr_group: 'str | None' = None)
  Method: write_sorting(sorting: 'BaseSorting', folder_path: 'str | Path', storage_options: 'dict | None' = None, **kwargs)
    Docstring:
      Write a sorting extractor to zarr format.

Function: add_synchrony_to_sorting(sorting, sync_event_ratio=0.3, seed=None)
  Docstring:
    Generates a sorting object with added synchronous events from an existing sorting object.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting object.
    sync_event_ratio : float, default: 0.3
        The ratio of added synchronous spikes with respect to the total number of spikes.
        E.g., 0.5 means that the final sorting will have 1.5 times the number of spikes, and all the extra
        spikes are synchronous (same sample_index), but on different units (not duplicates).
    seed : int, default: None
        The random seed.
    
    
    Returns
    -------
    sorting : TransformSorting
        The sorting object, keeping track of added spikes.

Function: aggregate_channels(recording_list_or_dict=None, renamed_channel_ids=None, recording_list=None)
  Docstring:
    Aggregates channels of multiple recordings into a single recording object
    
    Parameters
    ----------
    recording_list_or_dict: list | dict
        List or dict of BaseRecording objects to aggregate.
    renamed_channel_ids: array-like
        If given, channel ids are renamed as provided.
    
    Returns
    -------
    aggregate_recording: ChannelsAggregationRecording
        The aggregated recording object

Class: aggregate_units
  Docstring:
    Aggregates units of multiple sortings into a single sorting object
    
    Parameters
    ----------
    sorting_list: list
        List of BaseSorting objects to aggregate
    renamed_unit_ids: array-like
        If given, unit ids are renamed as provided. If None, unit ids are sequential integers.
    
    Returns
    -------
    aggregate_sorting: UnitsAggregationSorting
        The aggregated sorting object
  __init__(self, sorting_list, renamed_unit_ids=None)
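
  Example (editorial sketch; `sorting_a` and `sorting_b` are assumed to be existing BaseSorting objects, e.g. from two probes sorted separately):

    >>> import spikeinterface as si
    >>> agg_sorting = si.aggregate_units([sorting_a, sorting_b])  # unit ids become sequential integers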

Class: append_recordings
  Docstring:
    Takes as input a list of parent recordings each with multiple segments and
    returns a single multi-segment recording that "appends" all segments from
    all parent recordings.
    
    For instance, given one recording with 2 segments and one recording with 3 segments,
    this class will give one recording with 5 segments
    
    Parameters
    ----------
    recording_list : list of BaseRecording
        A list of recordings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across recordings
  __init__(self, recording_list, sampling_frequency_max_diff=0)

Class: append_sortings
  Docstring:
    Return a sorting that "appends" all segments from all sortings
    into one multi-segment sorting.
    
    Parameters
    ----------
    sorting_list : list of BaseSorting
        A list of sortings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across sortings
  __init__(self, sorting_list, sampling_frequency_max_diff=0)

Function: apply_merges_to_sorting(sorting, merge_unit_groups, new_unit_ids=None, censor_ms=None, return_extra=False, new_id_strategy='append')
  Docstring:
    Apply a resolved representation of the merges to a sorting object.
    
    This function is not lazy and creates a new NumpySorting with a compact spike_vector as fast as possible.
    
    If `censor_ms` is not None, duplicated spikes violating the `censor_ms` refractory period are removed.
    
    Optionally, the boolean mask of kept spikes is returned.
    
    Parameters
    ----------
    sorting : Sorting
        The Sorting object to apply merges.
    merge_unit_groups : list/tuple of lists/tuples
        A list of lists for every merge group. Each element needs to have at least two elements (two units to merge),
        but it can also have more (merge multiple units at once).
    new_unit_ids : list | None, default: None
        A new unit_ids for merged units. If given, it needs to have the same length as `merge_unit_groups`. If None,
        merged units will have the first unit_id of every list of merges.
    censor_ms : float | None, default: None
        When applying the merges, whether to discard consecutive spikes violating the given refractory period.
    return_extra : bool, default: False
        If True, also return a boolean mask of kept spikes and the new_unit_ids.
    new_id_strategy : "append" | "take_first", default: "append"
        The strategy that should be used, if `new_unit_ids` is None, to create new unit_ids.
    
            * "append" : new_units_ids will be added at the end of max(sorging.unit_ids)
            * "take_first" : new_unit_ids will be the first unit_id of every list of merges
    
    Returns
    -------
    sorting : NumpySorting
        The newly created sorting with the merged units
    keep_mask : numpy.array
        A boolean mask, if censor_ms is not None, telling which spike from the original spike vector
        has been kept, given the refractory period violations (None if censor_ms is None)
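
  Example (editorial sketch; assumes `sorting` contains units "u1" and "u2"):

    >>> import spikeinterface as si
    >>> merged, keep_mask, new_unit_ids = si.apply_merges_to_sorting(
    ...     sorting, merge_unit_groups=[["u1", "u2"]], censor_ms=0.3, return_extra=True
    ... )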

Function: compute_sparsity(templates_or_sorting_analyzer: "'Templates | SortingAnalyzer'", noise_levels: 'np.ndarray | None' = None, method: "'radius' | 'best_channels' | 'closest_channels' | 'snr' | 'amplitude' | 'energy' | 'by_property' | 'ptp'" = 'radius', peak_sign: "'neg' | 'pos' | 'both'" = 'neg', num_channels: 'int | None' = 5, radius_um: 'float | None' = 100.0, threshold: 'float | None' = 5, by_property: 'str | None' = None, amplitude_mode: "'extremum' | 'at_index' | 'peak_to_peak'" = 'extremum') -> 'ChannelSparsity'
  Docstring:
    Compute channel sparsity from a `SortingAnalyzer` for each template with several methods.
    
    Parameters
    ----------
    templates_or_sorting_analyzer : Templates | SortingAnalyzer
        A Templates or a SortingAnalyzer object.
        Some methods accept both objects ("best_channels", "radius", ...).
        Other methods require only SortingAnalyzer because internally the recording is needed.
    
    
    method : str
        * "best_channels" : N best channels with the largest amplitude. Use the "num_channels" argument to specify the
                           number of channels.
        * "closest_channels" : N closest channels according to the distance. Use the "num_channels" argument to specify the
                           number of channels.
        * "radius" : radius around the best channel. Use the "radius_um" argument to specify the radius in um.
        * "snr" : threshold based on template signal-to-noise ratio. Use the "threshold" argument
                 to specify the SNR threshold (in units of noise levels) and the "amplitude_mode" argument
                 to specify the mode to compute the amplitude of the templates.
        * "amplitude" : threshold based on the amplitude values on every channels. Use the "threshold" argument
                     to specify the ptp threshold (in units of amplitude) and the "amplitude_mode" argument
                     to specify the mode to compute the amplitude of the templates.
        * "energy" : threshold based on the expected energy that should be present on the channels,
                    given their noise levels. Use the "threshold" argument to specify the energy threshold
                    (in units of noise levels)
        * "by_property" : sparsity is given by a property of the recording and sorting (e.g. "group").
                         In this case the sparsity for each unit is given by the channels that have the same property
                         value as the unit. Use the "by_property" argument to specify the property name.
        * "ptp: : deprecated, use the 'snr' method with the 'peak_to_peak' amplitude mode instead.
    
    peak_sign : "neg" | "pos" | "both"
        Sign of the template to compute best channels.
    num_channels : int
        Number of channels for "best_channels" method.
    radius_um : float
        Radius in um for "radius" method.
    threshold : float
        Threshold for "snr", "energy" (in units of noise levels) and "ptp" methods (in units of amplitude).
        For the "snr" method, the template amplitude mode is controlled by the "amplitude_mode" argument.
    amplitude_mode : "extremum" | "at_index" | "peak_to_peak"
        Mode to compute the amplitude of the templates for the "snr", "amplitude", and "best_channels" methods.
    by_property : object
        Property name for "by_property" method.
    
    
    Returns
    -------
    sparsity : ChannelSparsity
        The estimated sparsity
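
  Example (editorial sketch; assumes `sorting_analyzer` is a SortingAnalyzer with templates):

    >>> import spikeinterface as si
    >>> # keep, for each unit, only channels within 50 um of its best channel
    >>> sparsity = si.compute_sparsity(sorting_analyzer, method="radius", radius_um=50.0)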

Class: concatenate_recordings
  Docstring:
    Return a recording that "concatenates" all segments from all parent recordings
    into one recording with a single segment. The operation is lazy.
    
    For instance, given one recording with 2 segments and one recording with
    3 segments, this class will give one recording with one large segment
    made by concatenating the 5 segments.
    
    Time information is lost upon concatenation. By default `ignore_times` is True.
    If it is False, you get an error unless:
    
      * all segments DO NOT have times, AND
      * all segments have t_start=None
    
    Parameters
    ----------
    recording_list : list of BaseRecording
        A list of recordings
    ignore_times: bool, default: True
        If True, time information (t_start, time_vector) is ignored when concatenating recordings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across recordings
  __init__(self, recording_list, ignore_times=True, sampling_frequency_max_diff=0)
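
  Example (editorial sketch; assumes `rec1` and `rec2` share channels and sampling frequency):

    >>> import spikeinterface as si
    >>> rec_concat = si.concatenate_recordings([rec1, rec2])  # lazy, exposes one long single segment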

Class: concatenate_sortings
  Docstring:
    Return a sorting that "concatenates" all segments from all sorting
    into one sorting with a single segment. This operation is lazy.
    
    For instance, given one sorting with 2 segments and one sorting with
    3 segments, this class will give one sorting with one large segment
    made by concatenating the 5 segments. The spike times (originating
    from each segment) are returned relative to the start of the concatenated segment.
    
    Time information is lost upon concatenation. By default `ignore_times` is True.
    If it is False, you get an error unless:
    
      * all segments DO NOT have times, AND
      * all segments have t_start=None
    
    Parameters
    ----------
    sorting_list : list of BaseSorting
        A list of sortings. If `total_samples_list` is not provided, all
        sortings should have an assigned recording.  Otherwise, all sortings
        should be monosegments.
    total_samples_list : list[int] or None, default: None
        If the sortings have no assigned recording, the total number of samples
        of each of the concatenated (monosegment) sortings is pulled from this
        list.
    ignore_times : bool, default: True
        If True, time information (t_start, time_vector) is ignored
        when concatenating the sortings' assigned recordings.
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across sortings
  __init__(self, sorting_list, total_samples_list=None, ignore_times=True, sampling_frequency_max_diff=0)
  Method: get_num_samples(self, segment_index=None)
    Docstring:
      Overrides the BaseSorting method, which requires a recording.

Function: create_extractor_from_new_recording(new_recording)
  Docstring:
    None

Function: create_extractor_from_new_sorting(new_sorting)
  Docstring:
    None

Function: create_recording_from_old_extractor(oldapi_recording_extractor) -> 'OldToNewRecording'
  Docstring:
    None

Function: create_sorting_analyzer(sorting, recording, format='memory', folder=None, sparse=True, sparsity=None, return_scaled=True, overwrite=False, backend_options=None, **sparsity_kwargs) -> "'SortingAnalyzer'"
  Docstring:
    Create a SortingAnalyzer by pairing a Sorting and the corresponding Recording.
    
    This object will handle a list of AnalyzerExtension for all the post processing steps like: waveforms,
    templates, unit locations, spike locations, quality metrics ...
    
    This object is also used for plotting purposes.
    
    
    Parameters
    ----------
    sorting : Sorting
        The sorting object
    recording : Recording
        The recording object
    folder : str or Path or None, default: None
        The folder where analyzer is cached
    format : "memory | "binary_folder" | "zarr", default: "memory"
        The mode to store analyzer. If "folder", the analyzer is stored on disk in the specified folder.
        The "folder" argument must be specified in case of mode "folder".
        If "memory" is used, the analyzer is stored in RAM. Use this option carefully!
    sparse : bool, default: True
        If True, then a sparsity mask is computed using the `estimate_sparsity()` function using
        a few spikes to get an estimate of dense templates to create a ChannelSparsity object.
        Then, the sparsity will be propagated to all extensions that handle sparsity (like waveforms, pca, ...).
        You can control `estimate_sparsity()`: all extra arguments are propagated to it (including job_kwargs).
    sparsity : ChannelSparsity or None, default: None
        The sparsity used to compute extensions. If this is given, `sparse` is ignored.
    return_scaled : bool, default: True
        All extensions that work with traces will use this global return_scaled : "waveforms", "noise_levels", "templates".
        This prevents return_scaled from differing across extensions, which would yield a wrong SNR, for instance.
    overwrite: bool, default: False
        If True, overwrite the folder if it already exists.
    backend_options : dict | None, default: None
        Keyword arguments for the backend specified by format. It can contain the:
    
            * storage_options: dict | None (fsspec storage options)
            * saving_options: dict | None (additional saving options for creating and saving datasets, e.g. compression/filters for zarr)
    
    sparsity_kwargs : keyword arguments
        Extra keyword arguments propagated to `estimate_sparsity()` (including job_kwargs).
    
    Returns
    -------
    sorting_analyzer : SortingAnalyzer
        The SortingAnalyzer object
    
    Examples
    --------
    >>> import spikeinterface as si
    
    >>> # Create dense analyzer and save to disk with binary_folder format.
    >>> sorting_analyzer = si.create_sorting_analyzer(sorting, recording, format="binary_folder", folder="/path/to_my/result")
    
    >>> # Can be reloaded
    >>> sorting_analyzer = si.load_sorting_analyzer(folder="/path/to_my/result")
    
    >>> # Can run extensions
    >>> sorting_analyzer.compute("unit_locations", ...)
    
    >>> # Can be copied to another format (extensions are propagated)
    >>> sorting_analyzer2 = sorting_analyzer.save_as(format="memory")
    >>> sorting_analyzer3 = sorting_analyzer.save_as(format="zarr", folder="/path/to_my/result.zarr")
    
    >>> # Can make a copy with a subset of units (extensions are propagated for the unit subset)
    >>> sorting_analyzer4 = sorting_analyzer.select_units(unit_ids=sorting.unit_ids[:5], format="memory")
    >>> sorting_analyzer5 = sorting_analyzer.select_units(unit_ids=sorting.unit_ids[:5], format="binary_folder", folder="/result_5units")
    
    Notes
    -----
    
    By default, creating a SortingAnalyzer can be slow because the sparsity is estimated.
    In some situations sparsity is not needed, so to make creation fast you can turn
    sparsity off (or give an external sparsity), as in the sketch below.
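
    A minimal sketch (editorial addition) of the fast, non-sparse creation:

    >>> import spikeinterface as si
    >>> analyzer = si.create_sorting_analyzer(sorting, recording, format="memory", sparse=False)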

Function: create_sorting_from_old_extractor(oldapi_sorting_extractor) -> 'OldToNewSorting'
  Docstring:
    None

Function: create_sorting_npz(num_seg, file_path)
  Docstring:
    Create a NPZ sorting file.
    
    Parameters
    ----------
    num_seg : int
        The number of segments.
    file_path : str | Path
        The file path to save the NPZ file.

Function: download_dataset(repo: 'str' = 'https://gin.g-node.org/NeuralEnsemble/ephy_testing_data', remote_path: 'str' = 'mearec/mearec_test_10s.h5', local_folder: 'Path | None' = None, update_if_exists: 'bool' = False) -> 'Path'
  Docstring:
    Function to download dataset from a remote repository using a combination of datalad and pooch.
    
    Pooch is designed to download single files from a remote repository.
    Because our datasets in gin sometimes point just to a folder, we still use datalad to download
    a list of all the files in the folder and then use pooch to download them one by one.
    
    Parameters
    ----------
    repo : str, default: "https://gin.g-node.org/NeuralEnsemble/ephy_testing_data"
        The repository to download the dataset from
    remote_path : str, default: "mearec/mearec_test_10s.h5"
        A specific subdirectory in the repository to download (e.g. Mearec, SpikeGLX, etc)
    local_folder : str | Path | None, default: None
        The destination folder / directory to download the dataset to.
        If None, the path `get_global_dataset_folder() / repo_name` is used (see `spikeinterface.core.globals`)
    update_if_exists : bool, default: False
        Forces re-download of the dataset if it already exists, default: False
    
    Returns
    -------
    Path
        The local path to the downloaded dataset
    
    Notes
    -----
    The reason we use pooch is that we have had problems with datalad not being able to download
    data on Windows machines, especially in CI.
    
    See https://handbook.datalad.org/en/latest/intro/windows.html
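
  Example (editorial sketch; uses the documented defaults and requires network access):

    >>> import spikeinterface as si
    >>> local_path = si.download_dataset(remote_path="mearec/mearec_test_10s.h5")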

Function: ensure_chunk_size(recording, total_memory=None, chunk_size=None, chunk_memory=None, chunk_duration=None, n_jobs=1, **other_kwargs)
  Docstring:
    "chunk_size" is the traces.shape[0] for each worker.
    
    Flexible chunk_size setter, with three ways to set it:
        * "chunk_size" : is the length in sample for each chunk independently of channel count and dtype.
        * "chunk_memory" : total memory per chunk per worker
        * "total_memory" : total memory over all workers.
    
    If chunk_size/chunk_memory/total_memory are all None then there is no chunk computing
    and the full trace is retrieved at once.
    
    Parameters
    ----------
    chunk_size : int or None
        size for one chunk per job
    chunk_memory : str or None
        must end with "k", "M", "G", etc for decimal units and "ki", "Mi", "Gi", etc for
        binary units. (e.g. "1k", "500M", "2G", "1ki", "500Mi", "2Gi")
    total_memory : str or None
        must end with "k", "M", "G", etc for decimal units and "ki", "Mi", "Gi", etc for
        binary units. (e.g. "1k", "500M", "2G", "1ki", "500Mi", "2Gi")
    chunk_duration : None or float or str
        Units are second if float.
        If str, it must contain units (e.g. "1s", "500ms")
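
  Example (editorial sketch; assumes `recording` is a BaseRecording, and that the function returns the resolved chunk length in samples, as the docstring suggests):

    >>> import spikeinterface as si
    >>> chunk_size = si.ensure_chunk_size(recording, chunk_memory="100M")  # ~100 MB of traces per chunk
    >>> chunk_size = si.ensure_chunk_size(recording, chunk_duration="1s")  # or fixed one-second chunks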

Function: ensure_n_jobs(recording, n_jobs=1)
  Docstring:
    None

Function: estimate_sparsity(sorting: 'BaseSorting', recording: 'BaseRecording', num_spikes_for_sparsity: 'int' = 100, ms_before: 'float' = 1.0, ms_after: 'float' = 2.5, method: "'radius' | 'best_channels' | 'closest_channels' | 'amplitude' | 'snr' | 'by_property' | 'ptp'" = 'radius', peak_sign: "'neg' | 'pos' | 'both'" = 'neg', radius_um: 'float' = 100.0, num_channels: 'int' = 5, threshold: 'float | None' = 5, amplitude_mode: "'extremum' | 'peak_to_peak'" = 'extremum', by_property: 'str | None' = None, noise_levels: 'np.ndarray | list | None' = None, **job_kwargs)
  Docstring:
    Estimate the sparsity without needing a SortingAnalyzer or Templates object.
    In case the sparsity method needs templates, they are computed on-the-fly.
    For the "snr" method, `noise_levels` must passed with the `noise_levels` argument.
    These can be computed with the `get_noise_levels()` function.
    
    Contrary to the previous implementation:
      * all units are computed in one read of recording
      * it doesn't require a folder
      * it doesn't consume too much memory
      * it uses internally the `estimate_templates_with_accumulator()` which is fast and parallel
    
    Note that the "energy" method is not supported because it requires a `SortingAnalyzer` object.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting
    recording : BaseRecording
        The recording
    num_spikes_for_sparsity : int, default: 100
        How many spikes per unit to compute the sparsity
    ms_before : float, default: 1.0
        Cut out in ms before spike time
    ms_after : float, default: 2.5
        Cut out in ms after spike time
    noise_levels : np.array | None, default: None
        Noise levels required for the "snr" and "energy" methods. You can use the
        `get_noise_levels()` function to compute them.
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems
    
    
    Returns
    -------
    sparsity : ChannelSparsity
        The estimated sparsity
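
  Example (editorial sketch; assumes a paired `sorting` and `recording`):

    >>> import spikeinterface as si
    >>> sparsity = si.estimate_sparsity(sorting, recording, method="radius", radius_um=75.0, progress_bar=True)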

Function: estimate_templates(recording: 'BaseRecording', spikes: 'np.ndarray', unit_ids: 'list | np.ndarray', nbefore: 'int', nafter: 'int', operator: 'str' = 'average', return_scaled: 'bool' = True, job_name=None, **job_kwargs)
  Docstring:
    Estimate dense templates with "average" or "median".
    If "average" internally estimate_templates_with_accumulator() is used to saved memory/
    
    Parameters
    ----------
    
    recording: BaseRecording
        The recording object
    spikes: 1d numpy array with several fields
        Spikes handled as a unique vector.
        This vector can be obtained with: `spikes = sorting.to_spike_vector()`
    unit_ids: list or numpy array
        List of unit_ids
    nbefore: int
        Number of samples to cut out before a spike
    nafter: int
        Number of samples to cut out after a spike
    return_scaled: bool, default: True
        If True, the traces are scaled before averaging
    
    Returns
    -------
    templates_array: np.array
        The average templates with shape (num_units, nbefore + nafter, num_channels)

Function: estimate_templates_with_accumulator(recording: 'BaseRecording', spikes: 'np.ndarray', unit_ids: 'list | np.ndarray', nbefore: 'int', nafter: 'int', return_scaled: 'bool' = True, job_name=None, return_std: 'bool' = False, verbose: 'bool' = False, **job_kwargs) -> 'np.ndarray'
  Docstring:
    This is a fast implementation to compute template averages and standard deviations.
    This is useful to estimate sparsity without the need to allocate large waveform buffers.
    The mechanism is pretty simple: it accumulates and sums spike waveforms (and their squares)
    in-place per worker and per unit.
    Note that median and percentiles can't be computed with this method, because they don't support
    the accumulator implementation.
    
    Parameters
    ----------
    recording: BaseRecording
        The recording object
    spikes: 1d numpy array with several fields
        Spikes handled as a unique vector.
        This vector can be obtained with: `spikes = sorting.to_spike_vector()`
    unit_ids: list or numpy array
        List of unit_ids
    nbefore: int
        Number of samples to cut out before a spike
    nafter: int
        Number of samples to cut out after a spike
    return_scaled: bool, default: True
        If True, the traces are scaled before averaging
    return_std: bool, default: False
        If True, the standard deviation is also computed.
    
    Returns
    -------
    templates_array: np.array
        The average templates with shape (num_units, nbefore + nafter, num_channels)

Function: extract_waveforms(recording, sorting, folder=None, mode='folder', precompute_template=('average',), ms_before=1.0, ms_after=2.0, max_spikes_per_unit=500, overwrite=None, return_scaled=True, dtype=None, sparse=True, sparsity=None, sparsity_temp_folder=None, num_spikes_for_sparsity=100, unit_batch_size=None, allow_unfiltered=None, use_relative_path=None, seed=None, load_if_exists=None, **kwargs)
  Docstring:
    This mocks the extract_waveforms() from versions <= 0.100, so as not to break old code, while using
    the SortingAnalyzer (version > 0.100) internally.
    
    This returns a MockWaveformExtractor object that mocks the old WaveformExtractor.

Function: extract_waveforms_to_buffers(recording, spikes, unit_ids, nbefore, nafter, mode='memmap', return_scaled=False, folder=None, dtype=None, sparsity_mask=None, copy=False, **job_kwargs)
  Docstring:
    Allocate buffers (memmap or shared memory) and then distribute every waveform into these buffers.
    
    Same as calling allocate_waveforms_buffers() and then distribute_waveforms_to_buffers().
    
    Important note: for the "shared_memory" mode, arrays_info contains references to
    the shared memory buffers; this variable must stay referenced as long as the arrays are used,
    and it is also returned.
    To avoid this, a copy to non-shared memory can be performed at the end.
    
    Parameters
    ----------
    recording: recording
        The recording object
    spikes: 1d numpy array with several fields
        Spikes handled as a unique vector.
        This vector can be obtained with: `spikes = sorting.to_spike_vector()`
    unit_ids: list or numpy array
        List of unit_ids
    nbefore: int
        N samples before spike
    nafter: int
        N samples after spike
    mode: "memmap" | "shared_memory", default: "memmap"
        The mode to use for the buffer
    return_scaled: bool, default: False
        Scale traces before exporting to buffer or not
    folder: str or path or None, default: None
        In case of memmap mode, folder to save npy files
    dtype: numpy.dtype, default: None
        dtype for waveforms buffer
    sparsity_mask: None or array of bool, default: None
        If not None, shape must be (len(unit_ids), len(channel_ids))
    copy: bool, default: False
        If True, the output shared memory object is copied to a numpy standard array.
        If copy=False, then arrays_info is also returned. Please keep in mind that arrays_info
        needs to stay referenced as long as waveforms_by_units is used; otherwise it will be very hard to debug.
        Also, when copy=False, the SharedMemory will need to be unlinked manually.
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems
    
    
    Returns
    -------
    waveforms_by_units: dict of arrays
        Arrays for all units (memmap or shared_memory)
    
    arrays_info: dict of info
        Optionally returned, in the shared_memory case, if copy=False.
        Dictionary used to "construct" the arrays in worker processes (memmap file or shared memory info)

Function: fix_job_kwargs(runtime_job_kwargs)
  Docstring:
    None

Function: generate_ground_truth_recording(durations=[10.0], sampling_frequency=25000.0, num_channels=4, num_units=10, sorting=None, probe=None, generate_probe_kwargs={'num_columns': 2, 'xpitch': 20, 'ypitch': 20, 'contact_shapes': 'circle', 'contact_shape_params': {'radius': 6}}, templates=None, ms_before=1.0, ms_after=3.0, upsample_factor=None, upsample_vector=None, generate_sorting_kwargs={'firing_rates': 15, 'refractory_period_ms': 4.0}, noise_kwargs={'noise_levels': 5.0, 'strategy': 'on_the_fly'}, generate_unit_locations_kwargs={'margin_um': 10.0, 'minimum_z': 5.0, 'maximum_z': 50.0, 'minimum_distance': 20}, generate_templates_kwargs=None, dtype='float32', seed=None)
  Docstring:
    Generate a recording with spikes, given a probe + sorting + templates.
    
    Parameters
    ----------
    durations : list[float], default: [10.]
        Durations in seconds for all segments.
    sampling_frequency : float, default: 25000.0
        Sampling frequency.
    num_channels : int, default: 4
        Number of channels, not used when probe is given.
    num_units : int, default: 10
        Number of units, not used when sorting is given.
    sorting : Sorting | None
        An external sorting object. If not provided, one is generated.
    probe : Probe | None
        An external Probe object. If not provided a probe is generated using generate_probe_kwargs.
    generate_probe_kwargs : dict
        A dict to construct the Probe using :py:func:`probeinterface.generate_multi_columns_probe()`.
    templates : np.array | None
        The templates of units.
        If None they are generated.
        Shape can be:
    
            * (num_units, num_samples, num_channels): standard case
            * (num_units, num_samples, num_channels, upsample_factor): case with oversampled templates to introduce jitter.
    ms_before : float, default: 1.0
        Cut out in ms before spike peak.
    ms_after : float, default: 3.0
        Cut out in ms after spike peak.
    upsample_factor : None | int, default: None
        An upsampling factor, used only when templates are not provided.
    upsample_vector : np.array | None
        Optionally, the upsample_vector can be given. It has the same shape as the spike vector.
    generate_sorting_kwargs : dict
        When sorting is not provided, this dict is used to generate a Sorting.
    noise_kwargs : dict
        Dict used to generate the noise with NoiseGeneratorRecording.
    generate_unit_locations_kwargs : dict
        Dict used to generate unit locations when templates are not provided.
    generate_templates_kwargs : dict
        Dict used to generate templates when templates are not provided.
    dtype : np.dtype, default: "float32"
        The dtype of the recording.
    seed : int | None
        Seed for random initialization.
        If None, a different Recording is generated at every call.
        Note: even with None, a generated recording keeps a seed internally to regenerate the same signal after dump/load.
    
    Returns
    -------
    recording : Recording
        The generated recording extractor.
    sorting : Sorting
        The generated sorting extractor.
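
  Example (editorial sketch of a small, reproducible ground-truth pair for testing):

    >>> import spikeinterface as si
    >>> recording, sorting = si.generate_ground_truth_recording(
    ...     durations=[10.0], num_channels=4, num_units=10, seed=42
    ... )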

Function: generate_recording(num_channels: 'int' = 2, sampling_frequency: 'float' = 30000.0, durations: 'list[float]' = [5.0, 2.5], set_probe: 'bool | None' = True, ndim: 'int | None' = 2, seed: 'int | None' = None) -> 'NumpyRecording'
  Docstring:
    Generate a lazy recording object.
    Useful for testing API and algos.
    
    Parameters
    ----------
    num_channels : int, default: 2
        The number of channels in the recording.
    sampling_frequency : float, default: 30000.0
        The sampling frequency of the recording in Hz.
    durations : list[float], default: [5.0, 2.5]
        The duration in seconds of each segment in the recording.
        The number of segments is determined by the length of this list.
    set_probe : bool, default: True
        If true, attaches probe to the returned `Recording`
    ndim : int, default: 2
        The number of dimensions of the probe, default: 2. Set to 3 to make 3 dimensional probe.
    seed : int | None, default: None
        A seed for the np.random.default_rng function
    
    Returns
    -------
    NumpyRecording
        Returns a NumpyRecording object with the specified parameters.

Function: generate_recording_by_size(full_traces_size_GiB: 'float', seed: 'int | None' = None, strategy: "Literal['tile_pregenerated', 'on_the_fly']" = 'tile_pregenerated') -> 'NoiseGeneratorRecording'
  Docstring:
    Generate a large lazy recording.
    This is a convenience wrapper around the NoiseGeneratorRecording class where only
    the size in GiB (NOT GB!) is specified.
    
    It is generated with 384 channels and a sampling frequency of 1 Hz. The duration is manipulated to
    produce the desired size.
    
    See NoiseGeneratorRecording for more details.
    
    Parameters
    ----------
    full_traces_size_GiB : float
        The size in gigabytes (GiB) of the recording.
    seed : int | None, default: None
        The seed for np.random.default_rng.
    strategy : "tile_pregenerated" | "on_the_fly", default: "tile_pregenerated"
        The strategy for generating noise chunks:
          * "tile_pregenerated": pregenerate a noise block of noise_block_size samples and repeat it.
                                 Very fast, and consumes memory for only one noise block.
          * "on_the_fly": generate a new noise block on the fly by combining seed + noise block index.
                          No memory preallocation, but a bit more computation (random generation).
    Returns
    -------
    NoiseGeneratorRecording
        A lazy random recording with the specified size.

Function: generate_snippets(nbefore=20, nafter=44, num_channels=2, wf_folder=None, sampling_frequency=30000.0, durations=[10.325, 3.5], set_probe=True, ndim=2, num_units=5, empty_units=None, **job_kwargs)
  Docstring:
    Generates a synthetic Snippets object.
    
    Parameters
    ----------
    nbefore : int, default: 20
        Number of samples before the peak.
    nafter : int, default: 44
        Number of samples after the peak.
    num_channels : int, default: 2
        Number of channels.
    wf_folder : str | Path | None, default: None
        Optional folder to save the waveform snippets. If None, snippets are in memory.
    sampling_frequency : float, default: 30000.0
        The sampling frequency of the snippets in Hz.
    ndim : int, default: 2
        The number of dimensions of the probe.
    num_units : int, default: 5
        The number of units.
    empty_units : list | None, default: None
        A list of units that will have no spikes.
    durations : List[float], default: [10.325, 3.5]
        The duration in seconds of each segment in the recording.
        The number of segments is determined by the length of this list.
    set_probe : bool, default: True
        If true, attaches probe to the returned snippets object
    **job_kwargs : dict, default: None
        Job keyword arguments for `snippets_from_sorting`
    
    Returns
    -------
    snippets : NumpySnippets
        The snippets object.
    sorting : NumpySorting
        The associated sorting object.

Function: generate_sorting(num_units=5, sampling_frequency=30000.0, durations=[10.325, 3.5], firing_rates=3.0, empty_units=None, refractory_period_ms=4.0, add_spikes_on_borders=False, num_spikes_per_border=3, border_size_samples=20, seed=None)
  Docstring:
    Generates sorting object with random firings.
    
    Parameters
    ----------
    num_units : int, default: 5
        Number of units.
    sampling_frequency : float, default: 30000.0
        The sampling frequency of the recording in Hz.
    durations : list, default: [10.325, 3.5]
        Duration of each segment in s.
    firing_rates : float, default: 3.0
        The firing rate of each unit (in Hz).
    empty_units : list, default: None
        List of units that will have no spikes. (used for testing mainly).
    refractory_period_ms : float, default: 4.0
        The refractory period in ms
    add_spikes_on_borders : bool, default: False
        If True, spikes will be added close to the borders of the segments.
        This is for testing some post-processing functions when they have
        to deal with border spikes.
    num_spikes_per_border : int, default: 3
        The number of spikes to add close to the borders of the segments.
    border_size_samples : int, default: 20
        The size of the border in samples to add border spikes.
    seed : int, default: None
        The random seed.
    
    Returns
    -------
    sorting : NumpySorting
        The sorting object.
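
  Example (editorial sketch of a reproducible synthetic sorting):

    >>> import spikeinterface as si
    >>> sorting = si.generate_sorting(num_units=5, durations=[10.0], firing_rates=3.0, seed=0)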

Function: generate_templates(channel_locations, units_locations, sampling_frequency, ms_before, ms_after, seed=None, dtype='float32', upsample_factor=None, unit_params=None, mode='ellipsoid')
  Docstring:
    Generate some templates from the given channel positions and neuron positions.
    
    The implementation is very naive: it generates a mono-channel waveform using generate_single_fake_waveform()
    and duplicates this same waveform on all channels, using a simple decay law per unit.
    
    
    Parameters
    ----------
    
    channel_locations : np.ndarray
        Channel locations.
    units_locations : np.ndarray
        Must be 3D.
    sampling_frequency : float
        Sampling frequency.
    ms_before : float
        Cut out in ms before spike peak.
    ms_after : float
        Cut out in ms after spike peak.
    seed : int | None
        A seed for random.
    dtype : numpy.dtype, default: "float32"
        Templates dtype
    upsample_factor : int | None, default: None
        If not None, templates are generated upsampled by this factor.
        A new dimension (axis=3) is then added to the templates with intermediate inter-sample representations.
        This allows easy random jitter by choosing a template along this new dimension.
    unit_params : dict[np.array] | dict[float] | dict[tuple] | None, default: None
        An optional dict containing parameters per units.
        Keys are parameter names:
    
            * "alpha": amplitude of the action potential in a.u. (default range: (6'000-9'000))
            * "depolarization_ms": the depolarization interval in ms (default range: (0.09-0.14))
            * "repolarization_ms": the repolarization interval in ms (default range: (0.5-0.8))
            * "recovery_ms": the recovery interval in ms (default range: (1.0-1.5))
            * "positive_amplitude": the positive amplitude in a.u. (default range: (0.05-0.15)) (negative is always -1)
            * "smooth_ms": the gaussian smooth in ms (default range: (0.03-0.07))
            * "spatial_decay": the spatial constant (default range: (20-40))
            * "propagation_speed": mimic a propagation delay with a kind of a "speed" (default range: (250., 350.)).
    
        Values can be:
            * an array of the same length as units
            * a scalar, then an array is created
            * a tuple, then this defines a range for random values.
    mode : "ellipsoid" | "sphere", default: "ellipsoid"
        Method used to calculate the distance between unit and channel location.
        Ellipsoid injects some anisotropy dependent on unit shape, sphere is equivalent
        to Euclidean distance.
    
    
    Returns
    -------
    templates: np.array
        The template array with shape
            * (num_units, num_samples, num_channels): standard case
            * (num_units, num_samples, num_channels, upsample_factor) if upsample_factor is not None

Function: get_available_analyzer_extensions()
  Docstring:
    Get all extensions that can be computed by the analyzer.

Function: get_best_job_kwargs()
  Docstring:
    Gives the best possible job_kwargs for the platform.
    Currently this function is based on developer experience, but may be adapted in the future.

Function: get_channel_distances(recording)
  Docstring:
    Distance between channel pairs

Function: get_chunk_with_margin(rec_segment, start_frame, end_frame, channel_indices, margin, add_zeros=False, add_reflect_padding=False, window_on_margin=False, dtype=None)
  Docstring:
    Helper to get chunk with margin
    
    The margin is extracted from the recording when possible. If
    at the edge of the recording, no margin is used unless one
    of `add_zeros` or `add_reflect_padding` is True. In the first
    case zero padding is used, in the second case np.pad is called
    with mode="reflect".

Function: get_closest_channels(recording, channel_ids=None, num_channels=None)
  Docstring:
    Get closest channels + distances
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to get closest channels
    channel_ids : list
        List of channel ids for which to compute the nearest neighborhood
    num_channels : int, default: None
        Maximum number of neighborhood channels to return
    
    Returns
    -------
    closest_channels_inds : array (2d)
        Closest channel indices in ascending order for each channel id given in input
    dists : array (2d)
        Distances in ascending order for each channel id given in input

Function: get_default_analyzer_extension_params(extension_name: 'str')
  Docstring:
    Get the default params for an extension.
    
    Parameters
    ----------
    extension_name : str
        The extension name
    
    Returns
    -------
    default_params : dict
        The default parameters for the extension

Function: get_default_zarr_compressor(clevel: 'int' = 5)
  Docstring:
    Return default Zarr compressor object for good performance on int16
    electrophysiology data.
    
    cname: zstd (zstandard)
    clevel: 5
    shuffle: BITSHUFFLE
    
    Parameters
    ----------
    clevel : int, default: 5
        Compression level (higher -> more compressed).
        Minimum 1, maximum 9.
    
    Returns
    -------
    Blosc.compressor
        The compressor object that can be used with the save to zarr function
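
  Example (editorial sketch; the `compressor` keyword passed to `save(format="zarr", ...)` is an assumption, not confirmed by this docstring):

    >>> import spikeinterface as si
    >>> compressor = si.get_default_zarr_compressor(clevel=7)  # zstd with higher compression
    >>> recording.save(format="zarr", folder="rec.zarr", compressor=compressor)  # `compressor` kwarg assumed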

Function: get_global_dataset_folder()
  Docstring:
    Get the global dataset folder.

Function: get_global_job_kwargs()
  Docstring:
    Get the global job kwargs.

Function: get_global_tmp_folder()
  Docstring:
    Get the global path temporary folder.

Function: get_noise_levels(recording: "'BaseRecording'", return_scaled: 'bool' = True, method: "Literal['mad', 'std']" = 'mad', force_recompute: 'bool' = False, random_slices_kwargs: 'dict' = {}, **kwargs) -> 'np.ndarray'
  Docstring:
    Estimate noise for each channel using the MAD method.
    You can use the standard deviation with `method="std"`.
    
    Internally it samples some chunks across segments.
    Then, it uses the MAD estimator (more robust than STD) or the STD on each chunk.
    Finally, the average of all MAD/STD values is returned.
    
    The result is cached in a property of the recording, so that the next call on the same
    recording will use the cached result unless `force_recompute=True`.
    
    Parameters
    ----------
    
    recording : BaseRecording
        The recording extractor to get noise levels
    return_scaled : bool
        If True, returned noise levels are scaled to uV
    method : "mad" | "std", default: "mad"
        The method to use to estimate noise levels
    force_recompute : bool
        If True, noise levels are recomputed even if they are already stored in the recording extractor
    random_slices_kwargs : dict
        Options transmitted to `get_random_recording_slices()`; please read the documentation of this
        function for more details.
    
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems
    
    
    Returns
    -------
    noise_levels : array
        Noise levels for each channel
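
  Example (editorial sketch; assumes `recording` is a BaseRecording):

    >>> import spikeinterface as si
    >>> noise_mad = si.get_noise_levels(recording, return_scaled=True)  # robust MAD estimate, in uV
    >>> noise_std = si.get_noise_levels(recording, method="std", force_recompute=True)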

Function: get_random_data_chunks(recording, return_scaled=False, concatenated=True, **random_slices_kwargs)
  Docstring:
    Extract random chunks across segments.
    
    Internally, it uses `get_random_recording_slices()` and retrieves the traces chunks as a list
    or as a single concatenated array.
    
    Please read `get_random_recording_slices()` for more details on parameters.
    
    
    Parameters
    ----------
    recording : BaseRecording
        The recording to get random chunks from
    return_scaled : bool, default: False
        If True, returned chunks are scaled to uV
    num_chunks_per_segment : int, default: 20
        Number of chunks per segment
    concatenated : bool, default: True
        If True, chunks are concatenated along the time axis
    **random_slices_kwargs : dict
        Options transmitted to `get_random_recording_slices()`; please read the documentation of this
        function for more details.
    
    Returns
    -------
    chunk_list : np.array | list of np.array
        Concatenated array of chunks, or a list of chunk arrays per segment

Function: get_template_amplitudes(templates_or_sorting_analyzer, peak_sign: "'neg' | 'pos' | 'both'" = 'neg', mode: "'extremum' | 'at_index' | 'peak_to_peak'" = 'extremum', return_scaled: 'bool' = True, abs_value: 'bool' = True)
  Docstring:
    Get amplitude per channel for each unit.
    
    Parameters
    ----------
    templates_or_sorting_analyzer : Templates | SortingAnalyzer
        A Templates or a SortingAnalyzer object
    peak_sign :  "neg" | "pos" | "both"
        Sign of the template to find extremum channels
    mode : "extremum" | "at_index" | "peak_to_peak", default: "at_index"
        Where the amplitude is computed
        * "extremum" : take the peak value (max or min depending on `peak_sign`)
        * "at_index" : take value at `nbefore` index
        * "peak_to_peak" : take the peak-to-peak amplitude
    return_scaled : bool, default: True
        Whether the amplitude is returned scaled or not.
    abs_value : bool, default: True
        Whether the extremum amplitude should be returned as an absolute value or not
    
    Returns
    -------
    peak_values : dict
        Dictionary with unit ids as keys and template amplitudes as values

Function: get_template_extremum_amplitude(templates_or_sorting_analyzer, peak_sign: "'neg' | 'pos' | 'both'" = 'neg', mode: "'extremum' | 'at_index' | 'peak_to_peak'" = 'at_index', abs_value: 'bool' = True)
  Docstring:
    Computes amplitudes on the best channel.
    
    Parameters
    ----------
    templates_or_sorting_analyzer : Templates | SortingAnalyzer
        A Templates or a SortingAnalyzer object
    peak_sign :  "neg" | "pos" | "both"
        Sign of the template to find extremum channels
    mode : "extremum" | "at_index" | "peak_to_peak", default: "at_index"
        Where the amplitude is computed
        * "extremum": take the peak value (max or min depending on `peak_sign`)
        * "at_index": take value at `nbefore` index
        * "peak_to_peak": take the peak-to-peak amplitude
    abs_value : bool, default: True
        Whether the extremum amplitude should be returned as an absolute value or not
    
    
    Returns
    -------
    amplitudes : dict
        Dictionary with unit ids as keys and amplitudes as values

Function: get_template_extremum_channel(templates_or_sorting_analyzer, peak_sign: "'neg' | 'pos' | 'both'" = 'neg', mode: "'extremum' | 'at_index' | 'peak_to_peak'" = 'extremum', outputs: "'id' | 'index'" = 'id')
  Docstring:
    Compute the channel with the extremum peak for each unit.
    
    Parameters
    ----------
    templates_or_sorting_analyzer : Templates | SortingAnalyzer
        A Templates or a SortingAnalyzer object
    peak_sign :  "neg" | "pos" | "both"
        Sign of the template to find extremum channels
    mode : "extremum" | "at_index" | "peak_to_peak", default: "at_index"
        Where the amplitude is computed
        * "extremum" : take the peak value (max or min depending on `peak_sign`)
        * "at_index" : take value at `nbefore` index
        * "peak_to_peak" : take the peak-to-peak amplitude
    outputs : "id" | "index", default: "id"
        * "id" : channel id
        * "index" : channel index
    
    Returns
    -------
    extremum_channels : dict
        Dictionary with unit ids as keys and extremum channels (id or index based on "outputs")
        as values
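
  Example (editorial sketch; assumes `sorting_analyzer` has templates computed):

    >>> import spikeinterface as si
    >>> best_channels = si.get_template_extremum_channel(sorting_analyzer, peak_sign="neg", outputs="id")
    >>> # dict mapping each unit_id to the channel id with the largest negative peak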

Function: get_template_extremum_channel_peak_shift(templates_or_sorting_analyzer, peak_sign: "'neg' | 'pos' | 'both'" = 'neg')
  Docstring:
    In some situations spike sorters could return a spike index with a small shift related to the waveform peak.
    This function estimates and returns these alignment shifts for the mean template.
    This function is internally used by `compute_spike_amplitudes()` to accurately retrieve the spike amplitudes.
    
    Parameters
    ----------
    templates_or_sorting_analyzer : Templates | SortingAnalyzer
        A Templates or a SortingAnalyzer object
    peak_sign :  "neg" | "pos" | "both"
        Sign of the template to find extremum channels
    
    Returns
    -------
    shifts : dict
        Dictionary with unit ids as keys and shifts as values

Function: inject_some_duplicate_units(sorting, num=4, max_shift=5, ratio=None, seed=None)
  Docstring:
    Inject some duplicate units in a sorting.
    The peak shift can be controlled within a range.
    
    Parameters
    ----------
    sorting :
        Original sorting.
    num : int, default: 4
        Number of injected units.
    max_shift : int, default: 5
        Range of the shift in samples.
    ratio : float | None, default: None
        Proportion of original spikes in the injected units.
    seed : int | None, default: None
        Random seed for creating unit peak shifts.
    
    Returns
    -------
    sorting_with_dup: Sorting
        A sorting with more units.

Function: inject_some_split_units(sorting, split_ids: 'list', num_split=2, output_ids=False, seed=None)
  Docstring:
    Inject some split units in a sorting.
    
    Parameters
    ----------
    sorting : BaseSorting
        Original sorting.
    split_ids : list
        List of unit_ids to split.
    num_split : int, default: 2
        Number of split units.
    output_ids : bool, default: False
        If True, return the new unit_ids.
    seed : int, default: None
        Random seed.
    
    Returns
    -------
    sorting_with_split : NumpySorting
        A sorting with split units.
    other_ids : dict
        The dictionary with the split unit_ids. Returned only if output_ids is True.

Class: inject_templates
  Docstring:
    Class for creating a recording based on spike timings and templates.
    Can contain just the templates, or add them to an already existing recording.
    
    Parameters
    ----------
    sorting : BaseSorting
        Sorting object containing all the units and their spike train.
    templates : np.ndarray[n_units, n_samples, n_channels] | np.ndarray[n_units, n_samples, n_oversampling]
        Array containing the templates to inject for all the units.
        Shape can be:
    
            * (num_units, num_samples, num_channels): standard case
            * (num_units, num_samples, num_channels, upsample_factor): case with oversampled templates to introduce sampling jitter.
    nbefore : list[int] | int | None, default: None
        The number of samples before the peak of the template to align the spike.
        If None, will default to the highest peak.
    amplitude_factor : list[float] | float | None, default: None
        The amplitude of each spike for each unit.
        Can be None (no scaling).
        Can be a scalar: all spikes have the same factor (certainly useless).
        Can be a vector with the same shape as the sorting's spike vector.
    parent_recording : BaseRecording | None, default: None
        The recording over which to add the templates.
        If None, will default to traces containing all 0.
    num_samples : list[int] | int | None, default: None
        The number of samples in the recording per segment.
        You can use int for mono-segment objects.
    upsample_vector : np.array | None, default: None
        When templates is 4d, we can simulate jitter.
        Optionally, the upsample_vector gives the jitter index, with one value per spike in the range 0-templates.shape[3].
    check_borders : bool, default: False
        Checks if the borders of the templates are zero.
    
    Returns
    -------
    injected_recording: InjectTemplatesRecording
        The recording with the templates injected.
  __init__(self, sorting: 'BaseSorting', templates: 'np.ndarray', nbefore: 'list[int] | int | None' = None, amplitude_factor: 'list[float] | float | None' = None, parent_recording: 'BaseRecording | None' = None, num_samples: 'list[int] | int | None' = None, upsample_vector: 'np.array | None' = None, check_borders: 'bool' = False) -> 'None'

Function: is_set_global_dataset_folder() -> 'bool'
  Docstring:
    Check if the global path dataset folder has been manually set.

Function: is_set_global_tmp_folder() -> 'bool'
  Docstring:
    Check if the global path temporary folder has been manually set.

Function: load(file_or_folder_or_dict, **kwargs) -> 'BaseExtractor | SortingAnalyzer | Motion | Template'
  Docstring:
    General load function to load a SpikeInterface object.
    
    The function can load:
        - a `Recording` or `Sorting` object from:
            * dictionary
            * json file
            * pkl file
            * binary folder (after `extractor.save(..., format='binary_folder')`)
            * zarr folder (after `extractor.save(..., format='zarr')`)
            * remote zarr folder
        - a `SortingAnalyzer` object from:
            * binary folder
            * zarr folder
            * remote zarr folder
            * WaveformExtractor folder (backward compatibility for v<0.101)
        - a `Motion` object from:
           * folder (after `Motion.save(folder)`)
        - a `Templates` object from:
           * zarr folder (after `Templates.add_templates_to_zarr_group()`)
           * dictionary (after `Templates.to_dict()`)
    
    Parameters
    ----------
    file_or_folder_or_dict : dictionary or folder or file (json, pickle)
        The file path, folder path, or dictionary to load the Recording, Sorting, or SortingAnalyzer from
    kwargs : keyword arguments for various objects, including
        * base_folder: str | Path | bool
            The base folder to make relative paths absolute. Only used to load Recording/Sorting objects.
            If True and file_or_folder_or_dict is a file, the parent folder of the file is used.
        * load_extensions: bool, default: True
            If True, the SortingAnalyzer extensions are loaded. Only used to load SortingAnalyzer objects.
        * storage_options: dict | None, default: None
            The storage options to use when loading the object. Only used to load Recording/Sorting objects.
        * backend_options: dict | None, default: None
            The backend options to use when loading the object. Only used to load SortingAnalyzer objects.
            The dictionary can contain the following keys:
            - storage_options: dict | None (fsspec storage options)
            - saving_options: dict | None (additional saving options for creating and saving datasets)
    
    Returns
    -------
    spikeinterface object: Recording or Sorting or SortingAnalyzer or Motion or Templates
        The loaded spikeinterface object
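
  Example (editorial sketch; paths are placeholders):

    >>> import spikeinterface as si
    >>> recording = si.load("my_recording_folder")  # binary or zarr folder
    >>> analyzer = si.load("my_analyzer_folder", load_extensions=True)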

Function: load_extractor(file_or_folder_or_dict, base_folder=None) -> 'BaseExtractor'
  Docstring:
    None

Function: load_sorting_analyzer(folder, load_extensions=True, format='auto', backend_options=None) -> "'SortingAnalyzer'"
  Docstring:
    Load a SortingAnalyzer object from disk.
    
    Parameters
    ----------
    folder : str or Path
        The folder / zarr folder where the analyzer is stored. If the folder is a remote path stored in the cloud,
        the backend_options can be used to specify credentials. If the remote path is not accessible,
        and backend_options is not provided, the function will try to load the object in anonymous mode (anon=True),
    which enables loading data from open buckets.
    load_extensions : bool, default: True
        Load all extensions or not.
    format : "auto" | "binary_folder" | "zarr"
        The format of the folder.
    backend_options : dict | None, default: None
        The backend options for the backend.
        The dictionary can contain the following keys:
    
            * storage_options: dict | None (fsspec storage options)
            * saving_options: dict | None (additional saving options for creating and saving datasets)
    
    Returns
    -------
    sorting_analyzer : SortingAnalyzer
        The loaded SortingAnalyzer
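    
    Examples
    --------
    A short sketch; the local folder and bucket names are hypothetical:
    
    >>> from spikeinterface import load_sorting_analyzer
    >>> analyzer = load_sorting_analyzer("analyzer_folder")
    >>> # remote zarr folder, passing fsspec storage options explicitly
    >>> analyzer = load_sorting_analyzer(
    ...     "s3://my-bucket/analyzer.zarr",
    ...     backend_options={"storage_options": {"anon": True}},
    ... )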

Function: load_sorting_analyzer_or_waveforms(folder, sorting=None)
  Docstring:
    Load a SortingAnalyzer from either a newly saved SortingAnalyzer folder or an old WaveformExtractor folder.
    
    Parameters
    ----------
    folder: str | Path
        The folder containing the sorting analyzer or waveform extractor
    sorting: BaseSorting | None, default: None
        The sorting object to instantiate with the SortingAnalyzer (only used for old WaveformExtractor)
    
    Returns
    -------
    sorting_analyzer: SortingAnalyzer
        The returned SortingAnalyzer.

Function: load_waveforms(folder, with_recording: 'bool' = True, sorting: 'Optional[BaseSorting]' = None, output='MockWaveformExtractor') -> 'MockWaveformExtractor | SortingAnalyzer'
  Docstring:
    This reads an old WaveformExtractor folder (binary or zarr) and converts it into a SortingAnalyzer or MockWaveformExtractor.
    
    It also mimics the old load_waveforms by opening a SortingResult folder and returning a MockWaveformExtractor.
    This latter behavior is useful to avoid breaking old code like the following in versions >= 0.101:
    
    >>> # In this example, `we` is a MockWaveformExtractor that behaves the same as before
    >>> we = extract_waveforms(..., folder="/my_we")
    >>> we = load_waveforms("/my_we")
    >>> templates = we.get_all_templates()
    
    Parameters
    ----------
    folder: str | Path
        The folder containing the waveform extractor (binary or zarr)
    with_recording: bool
        For back-compatibility, ignored
    sorting: BaseSorting | None, default: None
        The sorting object to instantiate with the Waveforms
    output: "MockWaveformExtractor" | "SortingAnalyzer", default: "MockWaveformExtractor"
        The output format
    
    Returns
    -------
    waveforms_or_analyzer: MockWaveformExtractor | SortingAnalyzer
        The returned MockWaveformExtractor or SortingAnalyzer

Class: noise_generator_recording
  Docstring:
    A lazy recording that generates white noise samples if and only if `get_traces` is called.
    
    This is done by tiling a small noise chunk.
    
    Two strategies keep the noise reproducible across different start/end frame calls:
      * "tile_pregenerated": pregenerate a small noise block and tile it depending on the start_frame/end_frame
      * "on_the_fly": generate small noise chunks on the fly and tile them; the seed also depends on the noise block index.
    
    
    Parameters
    ----------
    num_channels : int
        The number of channels.
    sampling_frequency : float
        The sampling frequency of the recorder.
    durations : list[float]
        The durations of each segment in seconds. Note that the length of this list is the number of segments.
    noise_levels : float | np.array, default: 1.0
        Std of the white noise (if an array, defined per channel)
    cov_matrix : np.array | None, default: None
        The covariance matrix of the noise
    dtype : np.dtype | str | None, default: "float32"
        The dtype of the recording. Note that only np.float32 and np.float64 are supported.
    seed : int | None, default: None
        The seed for np.random.default_rng.
    strategy : "tile_pregenerated" | "on_the_fly", default: "tile_pregenerated"
        The strategy for generating noise chunks:
          * "tile_pregenerated": pregenerate a noise chunk of noise_block_size samples and repeat it;
                                 very fast and consumes only one noise block.
          * "on_the_fly": generate a new noise block on the fly by combining seed + noise block index;
                          no memory preallocation but a bit more computation (random generation)
    noise_block_size : int, default: 30000
        Size in samples of the noise block.
    
    Notes
    -----
    If modifying this function, ensure that only one call to malloc is made per call to get_traces to
    maintain the optimized memory profile.
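    
    Examples
    --------
    A minimal sketch of the lazy behavior; nothing is generated until get_traces is called:
    
    >>> from spikeinterface import noise_generator_recording
    >>> rec = noise_generator_recording(
    ...     num_channels=4, sampling_frequency=30000.0, durations=[10.0],
    ...     noise_levels=5.0, seed=42, strategy="tile_pregenerated",
    ... )
    >>> traces = rec.get_traces(start_frame=0, end_frame=30000)  # generated on demand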
  __init__(self, num_channels: 'int', sampling_frequency: 'float', durations: 'list[float]', noise_levels: 'float | np.array' = 1.0, cov_matrix: 'np.array | None' = None, dtype: 'np.dtype | str | None' = 'float32', seed: 'int | None' = None, strategy: "Literal['tile_pregenerated', 'on_the_fly']" = 'tile_pregenerated', noise_block_size: 'int' = 30000)

Function: normal_pdf(x, mu: 'float' = 0.0, sigma: 'float' = 1.0)
  Docstring:
    Manual implementation of the Normal distribution pdf (probability density function).
    It is about 8 to 10 times faster than scipy.stats.norm.pdf().
    
    Parameters
    ----------
    x: scalar or array
        The x-axis
    mu: float, default: 0.0
        The mean of the Normal distribution.
    sigma: float, default: 1.0
        The standard deviation of the Normal distribution.
    
    Returns
    -------
    normal_pdf: scalar or array (same type as 'x')
        The pdf of the Normal distribution for the given x-axis.
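    
    Examples
    --------
    A small sketch; the closed form in the comment is the standard Normal pdf this function
    presumably evaluates:
    
    >>> import numpy as np
    >>> from spikeinterface import normal_pdf
    >>> x = np.linspace(-3, 3, 7)
    >>> y = normal_pdf(x, mu=0.0, sigma=1.0)
    >>> # expected: np.exp(-((x - 0.0) ** 2) / (2 * 1.0 ** 2)) / (1.0 * np.sqrt(2 * np.pi))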

Function: order_channels_by_depth(recording, channel_ids=None, dimensions=('x', 'y'), flip=False)
  Docstring:
    Order channels by depth, by first ordering the x-axis, and then the y-axis.
    
    Parameters
    ----------
    recording : BaseRecording
        The input recording
    channel_ids : list/array or None
        If given, a subset of channels to order locations for
    dimensions : str, tuple, or list, default: ('x', 'y')
        If str, it needs to be 'x', 'y', 'z'.
        If tuple or list, it sorts the locations in two dimensions using lexsort.
        This approach is recommended since there is less ambiguity
    flip : bool, default: False
        If flip is False then the order is bottom first (starting from tip of the probe).
        If flip is True then the order is upper first.
    
    Returns
    -------
    order_f : np.array
        Array with sorted indices
    order_r : np.array
        Array with indices to revert sorting
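    
    Examples
    --------
    A minimal sketch, assuming `recording` is an existing BaseRecording with channel locations:
    
    >>> order_f, order_r = order_channels_by_depth(recording, dimensions=("x", "y"))
    >>> traces_by_depth = recording.get_traces()[:, order_f]  # reorder channels by depth
    >>> traces_original = traces_by_depth[:, order_r]         # revert to the original order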

Function: random_spikes_selection(sorting: 'BaseSorting', num_samples: 'int | None' = None, method: 'str' = 'uniform', max_spikes_per_unit: 'int' = 500, margin_size: 'int | None' = None, seed: 'int | None' = None)
  Docstring:
    This replaces `select_random_spikes_uniformly()`.
    Random spike selection across units.
    Can optionally avoid spikes on segment borders if
    margin_size is not None.
    
    Parameters
    ----------
    sorting: BaseSorting
        The sorting object
    num_samples: list of int
        The number of samples per segment.
        Can be retrieved from recording with
        num_samples = [recording.get_num_samples(seg_index) for seg_index in range(recording.get_num_segments())]
    method: "uniform"  | "all", default: "uniform"
        The method to use. Only "uniform" is implemented for now
    max_spikes_per_unit: int, default: 500
        The maximum number of spikes per unit
    margin_size: None | int, default: None
        A margin on each border of segments to avoid border spikes
    seed: None | int, default: None
        A seed for random generator
    
    Returns
    -------
    random_spikes_indices: np.array
        Selected spike indices corresponding to the sorting spike vector.
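    
    Examples
    --------
    A minimal sketch, assuming `recording` and `sorting` are existing objects:
    
    >>> num_samples = [recording.get_num_samples(i) for i in range(recording.get_num_segments())]
    >>> indices = random_spikes_selection(sorting, num_samples=num_samples,
    ...                                   max_spikes_per_unit=500, margin_size=100, seed=0)
    >>> selected_spikes = sorting.to_spike_vector()[indices]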

Class: read_binary
  Docstring:
    RecordingExtractor for a binary format
    
    Parameters
    ----------
    file_paths : str or Path or list
        Path to the binary file
    sampling_frequency : float
        The sampling frequency
    num_channels : int
        Number of channels
    dtype : str or dtype
        The dtype of the binary file
    time_axis : int, default: 0
        The axis of the time dimension
    t_starts : None or list of float, default: None
        Times in seconds of the first sample for each segment
    channel_ids : list, default: None
        A list of channel ids
    file_offset : int, default: 0
        Number of bytes in the file to offset by during memmap instantiation.
    gain_to_uV : float or array-like, default: None
        The gain to apply to the traces
    offset_to_uV : float or array-like, default: None
        The offset to apply to the traces
    is_filtered : bool or None, default: None
        If True, the recording is assumed to be filtered. If None, is_filtered is not set.
    
    Notes
    -----
    When both num_channels and num_chan are provided, `num_channels` is used and `num_chan` is ignored.
    
    Returns
    -------
    recording : BinaryRecordingExtractor
        The recording Extractor
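    
    Examples
    --------
    A minimal sketch; the file name, dtype and gain are hypothetical:
    
    >>> from spikeinterface import read_binary
    >>> recording = read_binary("traces.raw", sampling_frequency=30000.0,
    ...                         num_channels=64, dtype="int16", gain_to_uV=0.195)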
  __init__(self, file_paths, sampling_frequency, dtype, num_channels: 'int', t_starts=None, channel_ids=None, time_axis=0, file_offset=0, gain_to_uV=None, offset_to_uV=None, is_filtered=None)
  Method: get_binary_description(self)
    Docstring:
      When `rec.is_binary_compatible()` is True
      this returns a dictionary describing the binary format.
  Method: is_binary_compatible(self) -> 'bool'
    Docstring:
      Checks if the recording is "binary" compatible.
      To be used before calling `rec.get_binary_description()`
      
      Returns
      -------
      bool
          True if the underlying recording is binary
  Method: write_recording(recording, file_paths, dtype=None, **job_kwargs)
    Docstring:
      Save the traces of a recording extractor in binary .dat format.
      
      Parameters
      ----------
      recording : RecordingExtractor
          The recording extractor object to be saved in .dat format
      file_paths : str
          The path to the file.
      dtype : dtype, default: None
          Type of the saved data
      **job_kwargs : keyword arguments for parallel processing:
          * chunk_duration or chunk_size or chunk_memory or total_memory
              - chunk_size : int
                  Number of samples per chunk
              - chunk_memory : str
                  Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
              - total_memory : str
                  Total memory usage (e.g. "500M", "2G")
              - chunk_duration : str or float or None
                  Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
          * n_jobs : int | float
              Number of jobs to use. With -1, the number of jobs is the same as the number of cores.
              Using a float between 0 and 1 will use that fraction of the total cores.
          * progress_bar : bool
              If True, a progress bar is printed
          * mp_context : "fork" | "spawn" | None, default: None
              Context for multiprocessing. It can be None, "fork" or "spawn".
              Note that "fork" is only safely available on LINUX systems

Class: read_binary_folder
  Docstring:
    BinaryFolderRecording is an internal format used in spikeinterface.
    It is a BinaryRecordingExtractor + metadata contained in a folder.
    
    It is created with the function: `recording.save(format="binary", folder="/myfolder")`
    
    Parameters
    ----------
    folder_path : str or Path
    
    Returns
    -------
    recording : BinaryFolderRecording
        The recording
  __init__(self, folder_path)
  Method: get_binary_description(self)
    Docstring:
      When `rec.is_binary_compatible()` is True
      this returns a dictionary describing the binary format.
  Method: is_binary_compatible(self) -> 'bool'
    Docstring:
      Checks if the recording is "binary" compatible.
      To be used before calling `rec.get_binary_description()`
      
      Returns
      -------
      bool
          True if the underlying recording is binary

Class: read_npy_snippets
  Docstring:
    Dead simple and super light format based on the NPY numpy format.
    
    It is in fact an archive of several .npy files.
    All spikes are stored in a two-column manner: index + labels.
  __init__(self, file_paths, sampling_frequency, channel_ids=None, nbefore=None, gain_to_uV=None, offset_to_uV=None)
  Method: write_snippets(snippets, file_paths, dtype=None)
    Docstring:
      Save a snippets extractor in binary .npy format.
      
      Parameters
      ----------
      snippets: SnippetsExtractor
          The snippets extractor object to be saved in .npy format
      file_paths: str
          The paths to the files.
      dtype: None, str or dtype
          Typecode or data-type to which the snippets will be cast.

Class: read_npy_snippets_folder
  Docstring:
    NpyFolderSnippets is an internal format used in spikeinterface.
    It is a NpySnippetsExtractor + metadata contained in a folder.
    
    It is created with the function: `snippets.save(format="npy", folder="/myfolder")`
    
    Parameters
    ----------
    folder_path : str or Path
        The path to the folder
    
    Returns
    -------
    snippets : NpyFolderSnippets
        The snippets
  __init__(self, folder_path)

Class: read_npz_folder
  Docstring:
    NpzFolderSorting is the old internal format used in spikeinterface (<=0.98.0)
    
    This is a folder that contains:
    
      * "sorting_cached.npz" file in the NpzSortingExtractor format
      * "npz.json" which the json description of NpzSortingExtractor
      * a metadata folder for units properties.
    
    It is created with the function: `sorting.save(folder="/myfolder", format="npz_folder")`
    
    Parameters
    ----------
    folder_path : str or Path
    
    Returns
    -------
    sorting : NpzFolderSorting
        The sorting
  __init__(self, folder_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Class: read_npz_sorting
  Docstring:
    Dead simple and super light format based on the NPZ numpy format.
    https://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html#numpy.savez
    
    It is in fact an archive of several .npy files.
    All spikes are stored in a two-column manner: index + labels.
  __init__(self, file_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Class: read_numpy_sorting_folder
  Docstring:
    NumpyFolderSorting is the new internal format used in spikeinterface (>=0.99.0) for caching sorting objects.
    
    It is a simple folder that contains:
      * a file "spike.npy" (numpy format) with all flatten spikes (using sorting.to_spike_vector())
      * a "numpysorting_info.json" containing sampling_frequency, unit_ids and num_segments
      * a metadata folder for units properties.
    
    It is created with the function: `sorting.save(folder="/myfolder", format="numpy_folder")`
  __init__(self, folder_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Function: read_python(path)
  Docstring:
    Parses a python script into a dictionary
    
    Parameters
    ----------
    path: str or Path
        Path to file to parse
    
    Returns
    -------
    metadata : dict
        Dictionary containing the parsed file

Function: read_zarr(folder_path: 'str | Path', storage_options: 'dict | None' = None) -> 'ZarrRecordingExtractor | ZarrSortingExtractor'
  Docstring:
    Read recording or sorting from a zarr format
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the zarr root file
    storage_options : dict or None
        Storage options for zarr `store`. E.g., if "s3://" or "gcs://" they can provide authentication methods, etc.
    
    Returns
    -------
    extractor : ZarrExtractor
        The loaded extractor
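    
    Examples
    --------
    A short sketch; the local path and bucket name are hypothetical:
    
    >>> from spikeinterface import read_zarr
    >>> extractor = read_zarr("data.zarr")
    >>> # remote store, passing fsspec authentication options
    >>> extractor = read_zarr("s3://my-bucket/data.zarr", storage_options={"anon": True})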

Function: reset_global_job_kwargs()
  Docstring:
    Reset the global job kwargs.

Function: reset_global_tmp_folder()
  Docstring:
    Generate a new global temporary folder path.

Class: select_segment_recording
  Docstring:
    Return a new recording with a subset of segments from a multi-segment recording.
    
    Parameters
    ----------
    recording : BaseRecording
        The multi-segment recording
    segment_indices : int | list[int]
        The segment indices to select
  __init__(self, recording: 'BaseRecording', segment_indices: 'int | list[int]')

Class: select_segment_sorting
  Docstring:
    Return a new sorting with a subset of segments from a multi-segment sorting.
    
    Parameters
    ----------
    sorting : BaseSorting
        The multi-segment sorting
    segment_indices : int | list[int]
        The segment indices to select
  __init__(self, sorting: 'BaseSorting', segment_indices: 'int | list[int]')

Function: set_global_dataset_folder(folder)
  Docstring:
    Set the global dataset folder.

Function: set_global_job_kwargs(**job_kwargs)
  Docstring:
    Set the global job kwargs.
    
    Parameters
    ----------
    
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1, the number of jobs is the same as the number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems

Function: set_global_tmp_folder(folder)
  Docstring:
    Set the global temporary folder path.

Function: snippets_from_sorting(recording, sorting, nbefore=20, nafter=44, wf_folder=None, **job_kwargs)
  Docstring:
    Extract snippets from recording and sorting instances
    
    Parameters
    ----------
    recording: BaseRecording
        The recording to get snippets from
    sorting: BaseSorting
        The sorting to get the frames from
    nbefore: int
        N samples before spike
    nafter: int
        N samples after spike
    wf_folder: None, str or path
        Folder to save npy files; if None, shared memory will be used to extract waveforms
    
    Returns
    -------
    snippets: NumpySnippets
        Snippets extractor created

Function: spike_vector_to_spike_trains(spike_vector: 'list[np.array]', unit_ids: 'np.array') -> 'dict[dict[str, np.array]]'
  Docstring:
    Computes all spike trains for all units/segments from a spike vector list.
    
    Internally calls numba if numba is installed.
    
    Parameters
    ----------
    spike_vector: list[np.ndarray]
        List of spike vectors obtained with sorting.to_spike_vector(concatenated=False)
    unit_ids: np.array
        Unit ids
    
    Returns
    -------
    spike_trains: dict[dict]:
        A dict containing, for each segment, the spike trains of all units
        (as a dict: unit_id --> spike_train).
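    
    Examples
    --------
    A minimal sketch, assuming `sorting` is an existing BaseSorting:
    
    >>> spike_vector = sorting.to_spike_vector(concatenated=False)
    >>> spike_trains = spike_vector_to_spike_trains(spike_vector, sorting.unit_ids)
    >>> first_unit_train = spike_trains[0][sorting.unit_ids[0]]  # segment 0, first unit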

Function: split_job_kwargs(mixed_kwargs)
  Docstring:
    This function splits mixed kwargs into job_kwargs and specific_kwargs.
    This can be useful for functions with a generic signature
    mixing specific kwargs and job kwargs.
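    
    Examples
    --------
    A minimal sketch; the return order shown here (specific kwargs first, then job kwargs)
    is an assumption for illustration:
    
    >>> mixed = dict(method="uniform", n_jobs=4, progress_bar=True)
    >>> specific_kwargs, job_kwargs = split_job_kwargs(mixed)
    >>> # specific_kwargs -> {"method": "uniform"}
    >>> # job_kwargs -> {"n_jobs": 4, "progress_bar": True}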

Function: split_recording(recording: 'BaseRecording')
  Docstring:
    Return a list of mono-segment recordings from a multi-segment recording.
    
    Parameters
    ----------
    recording : BaseRecording
        The multi-segment recording
    
    Returns
    -------
    recording_list
        A list of mono-segment recordings
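    
    Examples
    --------
    A minimal sketch, assuming `recording` is an existing multi-segment recording:
    
    >>> mono_recordings = split_recording(recording)
    >>> len(mono_recordings) == recording.get_num_segments()
    True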

Class: split_sorting
  Docstring:
    Splits a sorting with a single segment into multiple segments
    based on the given list of recordings (must be in order)
    
    Parameters
    ----------
    parent_sorting : BaseSorting
        Sorting with a single segment (e.g. from sorting concatenated recording)
    recording_or_recording_list : list of recordings, ConcatenateSegmentRecording, or None, default: None
        If list of recordings, uses the lengths of those recordings to split the sorting
        into smaller segments
        If ConcatenateSegmentRecording, uses the associated list of recordings to split
        the sorting into smaller segments
        If None, looks for the recording associated with the sorting
  __init__(self, parent_sorting: 'BaseSorting', recording_or_recording_list=None)

Function: synthesize_random_firings(num_units=20, sampling_frequency=30000.0, duration=60, refractory_period_ms=4.0, firing_rates=3.0, add_shift_shuffle=False, seed=None)
  Docstring:
    "
    Generate some spiketrain with random firing for one segment.
    
    Parameters
    ----------
    num_units : int, default: 20
        Number of units.
    sampling_frequency : float, default: 30000.0
        Sampling rate in Hz.
    duration : float, default: 60
        Duration of the segment in seconds.
    refractory_period_ms : float, default: 4.0
        Refractory period in ms.
    firing_rates : float or list[float], default: 3.0
        The firing rate of each unit (in Hz).
        If float, all units will have the same firing rate.
    add_shift_shuffle : bool, default: False
        Optionally add a small shuffle on half of the spikes to make the autocorrelogram less flat.
    seed : int, default: None
        Seed for the generator.
    
    Returns
    -------
    times: np.array
        Concatenated and sorted times vector.
    labels: np.array
        Concatenated and sorted label vector.
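    
    Examples
    --------
    A minimal sketch generating 30 s of spiking for 5 units:
    
    >>> times, labels = synthesize_random_firings(num_units=5, duration=30,
    ...                                           firing_rates=5.0, seed=42)
    >>> times.shape == labels.shape
    True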

Function: synthetize_spike_train_bad_isi(duration, baseline_rate, num_violations, violation_delta=1e-05)
  Docstring:
    Create a spike train with uniform inter-spike intervals, except where ISI violations occur.
    
    Parameters
    ----------
    duration : float
        Length of simulated recording (in seconds).
    baseline_rate : float
        Firing rate for "true" spikes.
    num_violations : int
        Number of contaminating spikes.
    violation_delta : float, default: 1e-5
        Temporal offset of contaminating spikes (in seconds).
    
    Returns
    -------
    spike_train : np.array
        Array of monotonically increasing spike times.

Function: write_binary_recording(recording: "'BaseRecording'", file_paths: 'list[Path | str] | Path | str', dtype: 'np.typing.DTypeLike' = None, add_file_extension: 'bool' = True, byte_offset: 'int' = 0, auto_cast_uint: 'bool' = True, verbose: 'bool' = False, **job_kwargs)
  Docstring:
    Save the traces of a recording extractor in one or several binary .dat files.
    
    Note:
        time_axis is always 0 (contrary to previous versions).
        To get time_axis=1 (which is a bad idea), use `write_binary_recording_file_handle()`
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor object to be saved in .dat format
    file_paths : str or list[str]
        The path(s) to the file(s).
    dtype : dtype or None, default: None
        Type of the saved data
    add_file_extension : bool, default: True
        If True, and the file path does not end in "raw", "bin", or "dat", then "raw" is added as an extension.
    byte_offset : int, default: 0
        Offset in bytes for the binary file (e.g. to write a header). This is useful in case you want to append data
        to an existing file where you wrote a header or other data before.
    auto_cast_uint : bool, default: True
        If True, unsigned integers are automatically cast to int if the specified dtype is signed
        .. deprecated:: 0.103, use the `unsigned_to_signed` function instead.
    verbose : bool
        This is the verbosity of the ChunkRecordingExecutor
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1, the number of jobs is the same as the number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems

Function: write_python(path, dict)
  Docstring:
    Saves a python dictionary to a file
    
    Parameters
    ----------
    path: str or Path
        Path to save file
    dict: dict
        dictionary to save

Function: write_to_h5_dataset_format(recording, dataset_path, segment_index, save_path=None, file_handle=None, time_axis=0, single_axis=False, dtype=None, chunk_size=None, chunk_memory='500M', verbose=False, auto_cast_uint=True, return_scaled=False)
  Docstring:
    Save the traces of a recording extractor in an h5 dataset.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor object to be saved in .dat format
    dataset_path : str
        Path to dataset in the h5 file (e.g. "/dataset")
    segment_index : int
        index of segment
    save_path : str, default: None
        The path to the file.
    file_handle : file handle, default: None
        The file handle to dump data. This can be used to append data to an header. In case file_handle is given,
        the file is NOT closed after writing the binary data.
    time_axis : 0 or 1, default: 0
        If 0 then traces are transposed to ensure (nb_sample, nb_channel) in the file.
        If 1, the traces shape (nb_channel, nb_sample) is kept in the file.
    single_axis : bool, default: False
        If True, a single-channel recording is saved as a one dimensional array
    dtype : dtype, default: None
        Type of the saved data
    chunk_size : None or int, default: None
        Number of samples per chunk. This avoids too much memory consumption for big files.
        If None and "chunk_memory" is given, the file is saved in chunks of "chunk_memory" MB
    chunk_memory : None or str, default: "500M"
        Chunk size as a string ending with "k", "M" or "G" (e.g. "500M")
    verbose : bool, default: False
        If True, output is verbose (when chunks are used)
    auto_cast_uint : bool, default: True
        If True, unsigned integers are automatically cast to int if the specified dtype is signed
    return_scaled : bool, default: False
        If True and the recording has scaling (gain_to_uV and offset_to_uV properties),
        traces are dumped to uV
