API for module: spikeinterface.comparison

Class: CollisionGTComparison
  Docstring:
    This class is an extension of GroundTruthComparison that focuses on
    benchmarking spikes in collision.
    
    This class needs maintenance and a bit of refactoring.
    
    Parameters
    ----------
    gt_sorting : BaseSorting
        The first sorting for the comparison
    collision_lag : float, default: 2.0
        Collision lag in ms.
    tested_sorting : BaseSorting
        The second sorting for the comparison
    nbins : int, default: 11
        Number of collision bins
    **kwargs : dict
        Keyword arguments for `GroundTruthComparison`
  __init__(self, gt_sorting, tested_sorting, collision_lag=2.0, nbins=11, **kwargs)
  Method: compute_all_pair_collision_bins(self)
    Docstring:
      None
  Method: compute_collision_by_similarity(self, similarity_matrix, unit_ids=None, good_only=False, min_accuracy=0.9)
    Docstring:
      None
  Method: detect_gt_collision(self)
    Docstring:
      None
  Method: get_label_count_per_collision_bins(self, gt_unit_id1, gt_unit_id2, bins)
    Docstring:
      None
  Method: get_label_for_collision(self, gt_unit_id1, gt_unit_id2)
    Docstring:
      None
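To make the idea behind `detect_gt_collision` concrete, here is a minimal conceptual sketch (not the library's implementation; the sampling frequency and spike frames are made up): two ground-truth spikes "collide" when their times fall within `collision_lag` of each other.

```python
# Conceptual sketch (not the library code): find pairs of spikes from two
# ground-truth units that fall within collision_lag of each other.
import numpy as np

fs = 30000.0                 # sampling frequency in Hz (assumed)
collision_lag_ms = 2.0       # same default as the class above
lag = int(collision_lag_ms / 1000 * fs)   # lag in samples (60)

spikes1 = np.array([1000, 5000, 9000])   # spike frames of GT unit 1
spikes2 = np.array([1030, 7000, 9100])   # spike frames of GT unit 2

# all pairwise time differences; a pair "collides" if |diff| <= lag
diffs = spikes1[:, None] - spikes2[None, :]
colliding_pairs = np.argwhere(np.abs(diffs) <= lag)
# here only spikes1[0] and spikes2[0] are within 60 samples of each other
```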

Class: CorrelogramGTComparison
  Docstring:
    This class is an extension of GroundTruthComparison that focuses on
    benchmarking the reconstruction of correlograms.
    
    This class needs maintenance and a bit of refactoring.
    
    Parameters
    ----------
    gt_sorting : BaseSorting
        The first sorting for the comparison
    tested_sorting : BaseSorting
        The second sorting for the comparison
    bin_ms : float, default: 1.0
        Size of bin for correlograms
    window_ms : float, default: 100.0
        The window around the spike to compute the correlation in ms.
    well_detected_score : float, default: 0.8
        Agreement score above which units are well detected
    **kwargs : dict
        Keyword arguments for `GroundTruthComparison`
  __init__(self, gt_sorting, tested_sorting, window_ms=100.0, bin_ms=1.0, well_detected_score=0.8, **kwargs)
  Method: compute_correlogram_by_similarity(self, similarity_matrix, window_ms=None)
    Docstring:
      None
  Method: compute_correlograms(self)
    Docstring:
      None
  Method: error(self, window_ms=None)
    Docstring:
      None

Class: GroundTruthComparison
  Docstring:
    Compares a sorter to a ground truth.
    
    This class can:
      * compute a "match" between gt_sorting and tested_sorting
      * optionally compute the score label (TP, FN, CL, FP) for each spike
      * count, for each GT unit, the totals of each label (TP, FN, CL, FP)
        in a DataFrame (GroundTruthComparison.count)
      * compute the confusion matrix with .get_confusion_matrix()
      * compute performance metrics with several strategies based on
        the counts per unit
      * count well detected units
      * count false positive detected units
      * count redundant units
      * count overmerged units
      * summarize all of this
    
    
    Parameters
    ----------
    gt_sorting : BaseSorting
        The first sorting for the comparison
    tested_sorting : BaseSorting
        The second sorting for the comparison
    gt_name : str, default: None
        The name of sorter 1
    tested_name : str, default: None
        The name of sorter 2
    delta_time : float, default: 0.4
        Number of ms to consider coincident spikes.
        This means that two spikes are considered simultaneous if they are within `delta_time` of each other or
        mathematically abs(spike1_time - spike2_time) <= delta_time.
    match_score : float, default: 0.5
        Minimum agreement score to match units
    chance_score : float, default: 0.1
        Minimum agreement score for a possible match
    redundant_score : float, default: 0.2
        Agreement score above which units are redundant
    overmerged_score : float, default: 0.2
        Agreement score above which units can be overmerged
    well_detected_score : float, default: 0.8
        Agreement score above which units are well detected
    exhaustive_gt : bool, default: False
        Tells whether the ground truth is "exhaustive" or not, in other words
        whether the GT has all possible units. It allows more performance
        measurements. For instance, MEArec simulated datasets have
        exhaustive_gt=True
    match_mode : "hungarian" | "best", default: "hungarian"
        The method to match units
    agreement_method : "count" | "distance", default: "count"
        The method to compute agreement scores. The "count" method computes agreement scores from spike counts.
        The "distance" method computes agreement scores from spike time distance functions.
    compute_labels : bool, default: False
        If True, labels are computed at instantiation
    compute_misclassifications : bool, default: False
        If True, misclassifications are computed at instantiation
    verbose : bool, default: False
        If True, output is verbose
    
    Returns
    -------
    sorting_comparison : SortingComparison
        The SortingComparison object
  __init__(self, gt_sorting: 'BaseSorting', tested_sorting: 'BaseSorting', gt_name: 'str | None' = None, tested_name: 'str | None' = None, delta_time: 'float' = 0.4, match_score: 'float' = 0.5, well_detected_score: 'float' = 0.8, redundant_score: 'float' = 0.2, overmerged_score: 'float' = 0.2, chance_score: 'float' = 0.1, exhaustive_gt: 'bool' = False, agreement_method: 'str' = 'count', match_mode: 'str' = 'hungarian', compute_labels: 'bool' = False, compute_misclassifications: 'bool' = False, verbose: 'bool' = False)
  Method: count_bad_units(self)
    Docstring:
      See get_bad_units
  Method: count_false_positive_units(self, redundant_score=None)
    Docstring:
      See get_false_positive_units().
      
      Parameters
      ----------
      redundant_score : float | None, default: None
          The agreement score below which tested units
          are counted as "false positive" (and not "redundant").
  Method: count_overmerged_units(self, overmerged_score=None)
    Docstring:
      See get_overmerged_units().
      
      Parameters
      ----------
      overmerged_score : float, default: None
          Tested units with 2 or more agreement scores above "overmerged_score"
          are counted as "overmerged".
  Method: count_redundant_units(self, redundant_score=None)
    Docstring:
      See get_redundant_units().
      
      Parameters
      ----------
      redundant_score : float, default: None
          The agreement score above which tested units
          are counted as "redundant" (and not "false positive").
  Method: count_units_categories(self, well_detected_score=None, overmerged_score=None, redundant_score=None)
    Docstring:
      None
  Method: count_well_detected_units(self, well_detected_score)
    Docstring:
      Count the number of well detected units.
      The parameters are the same as for get_well_detected_units().
      
      Parameters
      ----------
      well_detected_score : float, default: None
          The agreement score above which tested units
          are counted as "well detected".
  Method: get_bad_units(self)
    Docstring:
      Return the list of "bad units".
      
      "bad units" are defined as units in tested that are not
      in the best match list of GT units.
      
      So it is the union of "false positive units" and "redundant units".
      
      Needs exhaustive_gt=True
  Method: get_confusion_matrix(self)
    Docstring:
      Computes the confusion matrix.
      
      Returns
      -------
      confusion_matrix : pandas.DataFrame
          The confusion matrix
  Method: get_false_positive_units(self, redundant_score=None)
    Docstring:
      Return the list of "false positive units" from tested_sorting.
      
      "false positive units" are defined as units in tested that
      are not matched at all in GT units.
      
      Needs exhaustive_gt=True
      
      Parameters
      ----------
      redundant_score : float, default: None
          The agreement score below which tested units
          are counted as "false positive" (and not "redundant").
  Method: get_labels1(self, unit_id)
    Docstring:
      None
  Method: get_labels2(self, unit_id)
    Docstring:
      None
  Method: get_overmerged_units(self, overmerged_score=None)
    Docstring:
      Return "overmerged units"
      
      "overmerged units" are defined as units in tested
      that match more than one GT unit with an agreement score larger than overmerged_score.
      
      Parameters
      ----------
      overmerged_score : float, default: None
          Tested units with 2 or more agreement scores above "overmerged_score"
          are counted as "overmerged".
  Method: get_performance(self, method='by_unit', output='pandas')
    Docstring:
      Get performance rates with several methods:
        * "raw_count" : just render the raw count table
        * "by_unit" : render performance as rates, unit by unit of the GT
        * "pooled_with_average" : compute rates unit by unit and average them
      
      Parameters
      ----------
      method : "raw_count" | "by_unit" | "pooled_with_average", default: "by_unit"
          The method to compute performance
      output : "pandas" | "dict", default: "pandas"
          The output format
      
      Returns
      -------
      perf : pandas dataframe/series (or dict)
          dataframe/series (based on "output") with performance entries
  Method: get_redundant_units(self, redundant_score=None)
    Docstring:
      Return "redundant units"
      
      "redundant units" are defined as units in tested
      that match a GT unit with a high agreement score
      but are not the best match.
      In other words, these are GT units that are detected twice or more.
      
      Parameters
      ----------
      redundant_score : float, default: None
          The agreement score above which tested units
          are counted as "redundant" (and not "false positive" ).
  Method: get_well_detected_units(self, well_detected_score=None)
    Docstring:
      Return the list of "well detected units" from tested_sorting.
      
      "well detected units" are defined as units in tested that
      are well matched to GT units.
      
      Parameters
      ----------
      well_detected_score : float, default: None
          The agreement score above which tested units
          are counted as "well detected".
  Method: print_performance(self, method='pooled_with_average')
    Docstring:
      Print performance with the selected method
      
      Parameters
      ----------
      method : "by_unit" | "pooled_with_average", default: "pooled_with_average"
          The method to compute performance
  Method: print_summary(self, well_detected_score=None, redundant_score=None, overmerged_score=None)
    Docstring:
      Print a global performance summary that depends on the context:
        * exhaustive_gt = True/False
        * how many GT units (one or several)
      
      This summary mixes several performance metrics.
      
      Parameters
      ----------
      well_detected_score : float, default: None
          The agreement score above which tested units
          are counted as "well detected".
      redundant_score : float, default: None
          The agreement score below which tested units
          are counted as "false positive" (and not "redundant").
      overmerged_score : float, default: None
          Tested units with 2 or more agreement scores above "overmerged_score"
          are counted as "overmerged".
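To make the "count" agreement score and the "hungarian" match_mode concrete, here is a minimal NumPy/SciPy sketch (illustrative only, not the library internals; spike times and delta are in samples, and all names below are made up):

```python
# Illustrative sketch (not the library internals) of the "count" agreement
# score and "hungarian" match_mode. Spike times and delta are in samples.
import numpy as np
from scipy.optimize import linear_sum_assignment

def count_coincident(t1, t2, delta):
    """Count spikes of t1 that have a spike of t2 within +/- delta samples."""
    t2 = np.sort(t2)
    idx = np.searchsorted(t2, t1)
    n = 0
    for i, t in zip(idx, t1):
        candidates = []
        if i < len(t2):
            candidates.append(abs(t2[i] - t))
        if i > 0:
            candidates.append(abs(t2[i - 1] - t))
        n += any(d <= delta for d in candidates)
    return n

def agreement_score(t1, t2, delta):
    # coincident spikes divided by the size of the union of both trains
    n_coinc = count_coincident(t1, t2, delta)
    return n_coinc / (len(t1) + len(t2) - n_coinc)

gt = {0: np.array([100, 500, 900]), 1: np.array([200, 600])}
tested = {"a": np.array([101, 499, 905]), "b": np.array([202, 601])}

scores = np.array([[agreement_score(g, t, delta=5) for t in tested.values()]
                   for g in gt.values()])
# hungarian matching maximizes total agreement (minimize negated scores)
row_ind, col_ind = linear_sum_assignment(-scores)
```

Here GT unit 0 matches tested unit "a" and GT unit 1 matches "b", each with a perfect agreement score of 1.0.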

Class: GroundTruthStudy
  Docstring:
    None
  __init__(self, study_folder)

Class: HybridSpikesRecording
  Docstring:
    Class for creating a hybrid recording where additional spikes are added
    to already existing units.
    
    Parameters
    ----------
    wvf_extractor : WaveformExtractor
        The waveform extractor object of the existing recording.
    injected_sorting : BaseSorting | None
        Additional spikes to inject.
        If None, will generate it.
    max_injected_per_unit : int
        If injected_sorting=None, the max number of spikes per unit
        that is allowed to be injected.
    unit_ids : list[int] | None
        unit_ids to take in the wvf_extractor for spikes injection.
    injected_rate : float
        If injected_sorting=None, the max fraction of spikes per
        unit that is allowed to be injected.
    refractory_period_ms : float
        If injected_sorting=None, the injected spikes need to respect
        this refractory period.
    injected_sorting_folder : str | Path | None
        If given, the injected sorting is saved to this folder.
        It must be specified if injected_sorting is None or not serializable to file.
    
    Returns
    -------
    hybrid_spikes_recording : HybridSpikesRecording
        The recording containing units with real and hybrid spikes.
  __init__(self, wvf_extractor, injected_sorting: 'Union[BaseSorting, None]' = None, unit_ids: 'Union[List[int], None]' = None, max_injected_per_unit: 'int' = 1000, injected_rate: 'float' = 0.05, refractory_period_ms: 'float' = 1.5, injected_sorting_folder: 'Union[str, Path, None]' = None) -> 'None'

Class: HybridUnitsRecording
  Docstring:
    Class for creating a hybrid recording where additional units are added
    to an existing recording.
    
    Parameters
    ----------
    parent_recording : BaseRecording
        Existing recording to add on top of.
    templates : np.ndarray[n_units, n_samples, n_channels]
        Array containing the templates to inject for all the units.
    injected_sorting : BaseSorting | None
        The sorting for the injected units.
        If None, will be generated using the following parameters.
    nbefore : list[int] | int | None
        The sample index of the center of the template for each unit.
        If None, it defaults to the position of the highest peak.
    firing_rate : float
        The firing rate of the injected units (in Hz).
    amplitude_factor : np.ndarray | None
        The amplitude factor for each spike.
        If None, it is generated from a Gaussian centered at 1.0 with a
        standard deviation of amplitude_std.
    amplitude_std : float
        The standard deviation of the amplitude (centered at 1.0).
    refractory_period_ms : float
        The refractory period of the injected spike train (in ms).
    injected_sorting_folder : str | Path | None
        If given, the injected sorting is saved to this folder.
        It must be specified if injected_sorting is None or not serializable to file.
    seed : int, default: None
        Random seed for amplitude_factor
    
    Returns
    -------
    hybrid_units_recording : HybridUnitsRecording
        The recording containing real and hybrid units.
  __init__(self, parent_recording: 'BaseRecording', templates: 'np.ndarray', injected_sorting: 'Union[BaseSorting, None]' = None, nbefore: 'Union[List[int], int, None]' = None, firing_rate: 'float' = 10, amplitude_factor: 'Union[np.ndarray, None]' = None, amplitude_std: 'float' = 0.0, refractory_period_ms: 'float' = 2.0, injected_sorting_folder: 'Union[str, Path, None]' = None, seed=None)
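The injection described above can be sketched as a toy example (illustrative only, not the class internals; shapes and values are made up): each spike adds the template, scaled by its amplitude factor, into the parent traces around the spike frame.

```python
# Toy sketch of hybrid-unit injection (not the class internals).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 1000, 4
parent_traces = rng.normal(0.0, 1.0, (n_samples, n_channels))

template = np.zeros((40, n_channels))   # one unit: (num_samples, num_channels)
template[20, 0] = -50.0                 # negative peak on channel 0
nbefore = 20                            # center of the template (the peak)

spike_frames = np.array([100, 400, 800])
amplitude_factor = np.ones(spike_frames.size)   # amplitude_std = 0 -> all 1.0

hybrid = parent_traces.copy()
for frame, factor in zip(spike_frames, amplitude_factor):
    start = frame - nbefore
    hybrid[start:start + template.shape[0]] += factor * template
```

Each injected spike leaves a scaled copy of the template peak at its spike frame, on top of the original traces.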

Class: MultiSortingComparison
  Docstring:
    Compares multiple spike sorting outputs based on spike trains.
    
    - Pair-wise comparisons are made
    - An agreement graph is built based on the agreement score
    
    It can return a consensus-based sorting extractor via the `get_agreement_sorting()` method.
    
    Parameters
    ----------
    sorting_list : list
        List of sorting extractor objects to be compared
    name_list : list, default: None
        List of spike sorter names. If not given, sorters are named as "sorter0", "sorter1", "sorter2", etc.
    delta_time : float, default: 0.4
        Number of ms to consider coincident spikes
    match_score : float, default: 0.5
        Minimum agreement score to match units
    chance_score : float, default: 0.1
        Minimum agreement score for a possible match
    agreement_method : "count" | "distance", default: "count"
        The method to compute agreement scores. The "count" method computes agreement scores from spike counts.
        The "distance" method computes agreement scores from spike time distance functions.
    n_jobs : int, default: -1
        Number of cores to use in parallel. Uses all available cores if -1
    spiketrain_mode : "union" | "intersection", default: "union"
        Mode to extract agreement spike trains:
            - "union" : spike trains are the union between the spike trains of the best matching two sorters
            - "intersection" : spike trains are the intersection between the spike trains of the
               best matching two sorters
    verbose : bool, default: False
        If True, output is verbose
    do_matching : bool, default: True
        If True, the comparison is done when the `MultiSortingComparison` is initialized
    
    Returns
    -------
    multi_sorting_comparison : MultiSortingComparison
        MultiSortingComparison object with the multiple sorter comparison
  __init__(self, sorting_list, name_list=None, delta_time=0.4, match_score=0.5, chance_score=0.1, agreement_method='count', n_jobs=-1, spiketrain_mode='union', verbose=False, do_matching=True)
  Method: get_agreement_sorting(self, minimum_agreement_count=1, minimum_agreement_count_only=False)
    Docstring:
      Returns an AgreementSortingExtractor with units that reach the minimum_agreement_count.
      
      Parameters
      ----------
      minimum_agreement_count : int
          Minimum number of matches among sorters to include a unit.
      minimum_agreement_count_only : bool
          If True, only units with agreement == minimum_agreement_count are included.
          If False, units with agreement >= minimum_agreement_count are included.
      
      Returns
      -------
      agreement_sorting : AgreementSortingExtractor
          The output AgreementSortingExtractor
  Method: load_from_folder(folder_path)
    Docstring:
      None
  Method: save_to_folder(self, save_folder)
    Docstring:
      None
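The `spiketrain_mode` options above can be illustrated with two matched spike trains (a simplified sketch: exact frame equality stands in for the delta_time tolerance used in practice):

```python
# Sketch of "union" vs "intersection" spiketrain_mode for a matched unit pair.
import numpy as np

train1 = np.array([100, 200, 300, 400])   # frames from sorter A's unit
train2 = np.array([100, 300, 500])        # frames from sorter B's matched unit

union = np.union1d(train1, train2)             # "union": spikes found by either
intersection = np.intersect1d(train1, train2)  # "intersection": found by both
```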

Class: MultiTemplateComparison
  Docstring:
    Compares multiple waveform extractors using template similarity.
    
    - Pair-wise comparisons are made
    - An agreement graph is built based on the agreement score
    
    Parameters
    ----------
    waveform_list : list
        List of waveform extractor objects to be compared
    name_list : list, default: None
        List of session names. If not given, sorters are named as "sess0", "sess1", "sess2", etc.
    match_score : float, default: 0.8
        Minimum agreement score to match units
    chance_score : float, default: 0.3
        Minimum agreement score for a possible match
    verbose : bool, default: False
        If True, output is verbose
    do_matching : bool, default: True
        If True, the comparison is done when the `MultiTemplateComparison` is initialized
    support : "dense" | "union" | "intersection", default: "union"
        The support to compute the similarity matrix.
    num_shifts : int, default: 0
        Number of shifts to use to shift templates to maximize similarity.
    similarity_method : "cosine" | "l1" | "l2", default: "cosine"
        Method for the similarity matrix.
    
    Returns
    -------
    multi_template_comparison : MultiTemplateComparison
        MultiTemplateComparison object with the multiple template comparisons
  __init__(self, waveform_list, name_list=None, match_score=0.8, chance_score=0.3, verbose=False, similarity_method='cosine', support='union', num_shifts=0, do_matching=True)

Class: SymmetricSortingComparison
  Docstring:
    Compares two spike sorter outputs.
    
    - Spike trains are matched based on their agreement scores
    - Individual spikes are labelled as true positives (TP), false negatives (FN), false positives 1 (FP from spike
      train 1), false positives 2 (FP from spike train 2), misclassifications (CL)
    
    It also allows one to get the confusion matrix, as well as the agreement,
    false positive, and false negative fractions.
    
    Parameters
    ----------
    sorting1 : BaseSorting
        The first sorting for the comparison
    sorting2 : BaseSorting
        The second sorting for the comparison
    sorting1_name : str, default: None
        The name of sorter 1
    sorting2_name : str, default: None
        The name of sorter 2
    delta_time : float, default: 0.4
        Number of ms to consider coincident spikes
    match_score : float, default: 0.5
        Minimum agreement score to match units
    chance_score : float, default: 0.1
        Minimum agreement score for a possible match
    agreement_method : "count" | "distance", default: "count"
        The method to compute agreement scores. The "count" method computes agreement scores from spike counts.
        The "distance" method computes agreement scores from spike time distance functions.
    verbose : bool, default: False
        If True, output is verbose
    
    Returns
    -------
    sorting_comparison : SortingComparison
        The SortingComparison object
  __init__(self, sorting1: 'BaseSorting', sorting2: 'BaseSorting', sorting1_name: 'str | None' = None, sorting2_name: 'str | None' = None, delta_time: 'float' = 0.4, match_score: 'float' = 0.5, chance_score: 'float' = 0.1, agreement_method: 'str' = 'count', verbose: 'bool' = False)
  Method: get_agreement_fraction(self, unit1=None, unit2=None)
    Docstring:
      None
  Method: get_best_unit_match1(self, unit1)
    Docstring:
      None
  Method: get_best_unit_match2(self, unit2)
    Docstring:
      None
  Method: get_matching(self)
    Docstring:
      None
  Method: get_matching_event_count(self, unit1, unit2)
    Docstring:
      None
  Method: get_matching_unit_list1(self, unit1)
    Docstring:
      None
  Method: get_matching_unit_list2(self, unit2)
    Docstring:
      None
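A minimal sketch of the per-spike labelling described above (illustrative only; misclassifications (CL) are omitted, and `delta` is in samples, playing the role of delta_time):

```python
# Sketch of per-spike labelling for a matched unit pair: a spike is a true
# positive (TP) if the other train has a spike within +/- delta samples,
# otherwise a false positive from its own train (FP1 or FP2).
import numpy as np

delta = 4
t1 = np.array([100, 250, 500])   # spike frames from sorting1's unit
t2 = np.array([102, 499, 700])   # spike frames from the matched sorting2 unit

def label_spikes(this, other, fp_label):
    return ["TP" if np.min(np.abs(other - t)) <= delta else fp_label
            for t in this]

labels1 = label_spikes(t1, t2, "FP1")   # FP1: false positives from train 1
labels2 = label_spikes(t2, t1, "FP2")   # FP2: false positives from train 2
```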

Class: TemplateComparison
  Docstring:
    Compares units from different sessions based on template similarity.
    
    Parameters
    ----------
    sorting_analyzer_1 : SortingAnalyzer
        The first SortingAnalyzer to get templates to compare.
    sorting_analyzer_2 : SortingAnalyzer
        The second SortingAnalyzer to get templates to compare.
    unit_ids1 : list, default: None
        List of units from sorting_analyzer_1 to compare.
    unit_ids2 : list, default: None
        List of units from sorting_analyzer_2 to compare.
    name1 : str, default: "sess1"
        Name of first session.
    name2 : str, default: "sess2"
        Name of second session.
    similarity_method : "cosine" | "l1" | "l2", default: "cosine"
        Method for the similarity matrix.
    support : "dense" | "union" | "intersection", default: "union"
        The support to compute the similarity matrix.
    num_shifts : int, default: 0
        Number of shifts to use to shift templates to maximize similarity.
    verbose : bool, default: False
        If True, output is verbose.
    chance_score : float, default: 0.3
        Minimum agreement score for a possible match
    match_score : float, default: 0.7
        Minimum agreement score to match units
    
    
    Returns
    -------
    comparison : TemplateComparison
        The output TemplateComparison object.
  __init__(self, sorting_analyzer_1, sorting_analyzer_2, name1=None, name2=None, unit_ids1=None, unit_ids2=None, match_score=0.7, chance_score=0.3, similarity_method='cosine', support='union', num_shifts=0, verbose=False)
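The "cosine" similarity_method on dense support can be sketched as follows (illustrative only, with made-up templates, not the class internals): flatten each (num_samples, num_channels) template and take normalized dot products.

```python
# Sketch of cosine template similarity between two sessions.
import numpy as np

rng = np.random.default_rng(42)
templates1 = rng.normal(size=(3, 50, 8))   # 3 units, 50 samples, 8 channels
# session 2: the same units permuted, with a little added noise
templates2 = templates1[[2, 0, 1]] + 0.01 * rng.normal(size=(3, 50, 8))

a = templates1.reshape(len(templates1), -1)
b = templates2.reshape(len(templates2), -1)
a = a / np.linalg.norm(a, axis=1, keepdims=True)
b = b / np.linalg.norm(b, axis=1, keepdims=True)

similarity = a @ b.T                 # (3, 3) cosine similarity matrix
best_match = similarity.argmax(axis=1)   # best session-2 unit for each unit
```

Since templates2 is a permuted, lightly perturbed copy of templates1, the argmax recovers the permutation.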

Class: compare_multiple_sorters
  Docstring:
    Compares multiple spike sorting outputs based on spike trains.
    
    - Pair-wise comparisons are made
    - An agreement graph is built based on the agreement score
    
    It can return a consensus-based sorting extractor via the `get_agreement_sorting()` method.
    
    Parameters
    ----------
    sorting_list : list
        List of sorting extractor objects to be compared
    name_list : list, default: None
        List of spike sorter names. If not given, sorters are named as "sorter0", "sorter1", "sorter2", etc.
    delta_time : float, default: 0.4
        Number of ms to consider coincident spikes
    match_score : float, default: 0.5
        Minimum agreement score to match units
    chance_score : float, default: 0.1
        Minimum agreement score for a possible match
    agreement_method : "count" | "distance", default: "count"
        The method to compute agreement scores. The "count" method computes agreement scores from spike counts.
        The "distance" method computes agreement scores from spike time distance functions.
    n_jobs : int, default: -1
        Number of cores to use in parallel. Uses all available cores if -1
    spiketrain_mode : "union" | "intersection", default: "union"
        Mode to extract agreement spike trains:
            - "union" : spike trains are the union between the spike trains of the best matching two sorters
            - "intersection" : spike trains are the intersection between the spike trains of the
               best matching two sorters
    verbose : bool, default: False
        If True, output is verbose
    do_matching : bool, default: True
        If True, the comparison is done when the `MultiSortingComparison` is initialized
    
    Returns
    -------
    multi_sorting_comparison : MultiSortingComparison
        MultiSortingComparison object with the multiple sorter comparison
  __init__(self, sorting_list, name_list=None, delta_time=0.4, match_score=0.5, chance_score=0.1, agreement_method='count', n_jobs=-1, spiketrain_mode='union', verbose=False, do_matching=True)
  Method: get_agreement_sorting(self, minimum_agreement_count=1, minimum_agreement_count_only=False)
    Docstring:
      Returns an AgreementSortingExtractor with units that reach the minimum_agreement_count.
      
      Parameters
      ----------
      minimum_agreement_count : int
          Minimum number of matches among sorters to include a unit.
      minimum_agreement_count_only : bool
          If True, only units with agreement == minimum_agreement_count are included.
          If False, units with agreement >= minimum_agreement_count are included.
      
      Returns
      -------
      agreement_sorting : AgreementSortingExtractor
          The output AgreementSortingExtractor
  Method: load_from_folder(folder_path)
    Docstring:
      None
  Method: save_to_folder(self, save_folder)
    Docstring:
      None

Class: compare_multiple_templates
  Docstring:
    Compares multiple waveform extractors using template similarity.
    
    - Pair-wise comparisons are made
    - An agreement graph is built based on the agreement score
    
    Parameters
    ----------
    waveform_list : list
        List of waveform extractor objects to be compared
    name_list : list, default: None
        List of session names. If not given, sorters are named as "sess0", "sess1", "sess2", etc.
    match_score : float, default: 0.8
        Minimum agreement score to match units
    chance_score : float, default: 0.3
        Minimum agreement score for a possible match
    verbose : bool, default: False
        If True, output is verbose
    do_matching : bool, default: True
        If True, the comparison is done when the `MultiTemplateComparison` is initialized
    support : "dense" | "union" | "intersection", default: "union"
        The support to compute the similarity matrix.
    num_shifts : int, default: 0
        Number of shifts to use to shift templates to maximize similarity.
    similarity_method : "cosine" | "l1" | "l2", default: "cosine"
        Method for the similarity matrix.
    
    Returns
    -------
    multi_template_comparison : MultiTemplateComparison
        MultiTemplateComparison object with the multiple template comparisons
  __init__(self, waveform_list, name_list=None, match_score=0.8, chance_score=0.3, verbose=False, similarity_method='cosine', support='union', num_shifts=0, do_matching=True)

Class: compare_sorter_to_ground_truth
  Docstring:
    Compares a sorter to a ground truth.
    
    This class can:
      * compute a "match" between gt_sorting and tested_sorting
      * optionally compute the score label (TP, FN, CL, FP) for each spike
      * count, for each GT unit, the totals of each label (TP, FN, CL, FP)
        in a DataFrame (GroundTruthComparison.count)
      * compute the confusion matrix with .get_confusion_matrix()
      * compute performance metrics with several strategies based on
        the counts per unit
      * count well detected units
      * count false positive detected units
      * count redundant units
      * count overmerged units
      * summarize all of this
    
    
    Parameters
    ----------
    gt_sorting : BaseSorting
        The first sorting for the comparison
    tested_sorting : BaseSorting
        The second sorting for the comparison
    gt_name : str, default: None
        The name of sorter 1
    tested_name : str, default: None
        The name of sorter 2
    delta_time : float, default: 0.4
        Number of ms to consider coincident spikes.
        This means that two spikes are considered simultaneous if they are within `delta_time` of each other or
        mathematically abs(spike1_time - spike2_time) <= delta_time.
    match_score : float, default: 0.5
        Minimum agreement score to match units
    chance_score : float, default: 0.1
        Minimum agreement score for a possible match
    redundant_score : float, default: 0.2
        Agreement score above which units are redundant
    overmerged_score : float, default: 0.2
        Agreement score above which units can be overmerged
    well_detected_score : float, default: 0.8
        Agreement score above which units are well detected
    exhaustive_gt : bool, default: False
        Tells whether the ground truth is "exhaustive" or not, in other words
        whether the GT has all possible units. It allows more performance
        measurements. For instance, MEArec simulated datasets have
        exhaustive_gt=True
    match_mode : "hungarian" | "best", default: "hungarian"
        The method to match units
    agreement_method : "count" | "distance", default: "count"
        The method to compute agreement scores. The "count" method computes agreement scores from spike counts.
        The "distance" method computes agreement scores from spike time distance functions.
    compute_labels : bool, default: False
        If True, labels are computed at instantiation
    compute_misclassifications : bool, default: False
        If True, misclassifications are computed at instantiation
    verbose : bool, default: False
        If True, output is verbose
    
    Returns
    -------
    sorting_comparison : SortingComparison
        The SortingComparison object
  __init__(self, gt_sorting: 'BaseSorting', tested_sorting: 'BaseSorting', gt_name: 'str | None' = None, tested_name: 'str | None' = None, delta_time: 'float' = 0.4, match_score: 'float' = 0.5, well_detected_score: 'float' = 0.8, redundant_score: 'float' = 0.2, overmerged_score: 'float' = 0.2, chance_score: 'float' = 0.1, exhaustive_gt: 'bool' = False, agreement_method: 'str' = 'count', match_mode: 'str' = 'hungarian', compute_labels: 'bool' = False, compute_misclassifications: 'bool' = False, verbose: 'bool' = False)
  Method: count_bad_units(self)
    Docstring:
      See get_bad_units
  Method: count_false_positive_units(self, redundant_score=None)
    Docstring:
      See get_false_positive_units().
      
      Parameters
      ----------
      redundant_score : float | None, default: None
          The agreement score below which tested units
          are counted as "false positive" (and not "redundant").
  Method: count_overmerged_units(self, overmerged_score=None)
    Docstring:
      See get_overmerged_units().
      
      Parameters
      ----------
      overmerged_score : float, default: None
          Tested units with 2 or more agreement scores above "overmerged_score"
          are counted as "overmerged".
  Method: count_redundant_units(self, redundant_score=None)
    Docstring:
      See get_redundant_units().
      
      Parameters
      ----------
      redundant_score : float, default: None
          The agreement score below which tested units
          are counted as "false positive" (and not "redundant").
  Method: count_units_categories(self, well_detected_score=None, overmerged_score=None, redundant_score=None)
    Docstring:
      None
  Method: count_well_detected_units(self, well_detected_score)
    Docstring:
      Count how many units are well detected.
      Parameters are the same as get_well_detected_units().
      
      Parameters
      ----------
      well_detected_score : float, default: None
          The agreement score above which tested units
          are counted as "well detected".
  Method: get_bad_units(self)
    Docstring:
      Return the list of "bad units".
      
      "bad units" are defined as units in tested that are not
      in the best match list of GT units.
      
      So it is the union of "false positive units" + "redundant units".
      
      Needs exhaustive_gt=True
  Method: get_confusion_matrix(self)
    Docstring:
      Computes the confusion matrix.
      
      Returns
      -------
      confusion_matrix : pandas.DataFrame
          The confusion matrix
  Method: get_false_positive_units(self, redundant_score=None)
    Docstring:
      Return the list of "false positive units" from tested_sorting.
      
      "false positive units" are defined as units in tested that
      are not matched at all in GT units.
      
      Needs exhaustive_gt=True
      
      Parameters
      ----------
      redundant_score : float, default: None
          The agreement score below which tested units
          are counted as "false positive" (and not "redundant").
  Method: get_labels1(self, unit_id)
    Docstring:
      None
  Method: get_labels2(self, unit_id)
    Docstring:
      None
  Method: get_overmerged_units(self, overmerged_score=None)
    Docstring:
      Return "overmerged units"
      
      "overmerged units" are defined as units in tested
      that match more than one GT unit with an agreement score larger than overmerged_score.
      
      Parameters
      ----------
      overmerged_score : float, default: None
          Tested units with 2 or more agreement scores above "overmerged_score"
          are counted as "overmerged".
  Method: get_performance(self, method='by_unit', output='pandas')
    Docstring:
      Get performance rates with one of several methods:
        * "raw_count" : just return the raw count table
        * "by_unit" : return performance rates unit by unit of the GT
        * "pooled_with_average" : compute rates unit by unit and average them
      
      Parameters
      ----------
      method : "raw_count" | "by_unit" | "pooled_with_average", default: "by_unit"
          The method to compute performance
      output : "pandas" | "dict", default: "pandas"
          The output format
      
      Returns
      -------
      perf : pandas dataframe/series (or dict)
          dataframe/series (based on "output") with performance entries
  Method: get_redundant_units(self, redundant_score=None)
    Docstring:
      Return "redundant units"
      
      "redundant units" are defined as units in tested
      that match a GT unit with a high agreement score
      but are not the best match.
      In other words, units in GT that are detected twice or more.
      
      Parameters
      ----------
      redundant_score : float, default: None
          The agreement score above which tested units
          are counted as "redundant" (and not "false positive" ).
  Method: get_well_detected_units(self, well_detected_score=None)
    Docstring:
      Return units list of "well detected units" from tested_sorting.
      
      "well detected units" are defined as units in tested that
      are well matched to GT units.
      
      Parameters
      ----------
      well_detected_score : float, default: None
          The agreement score above which tested units
          are counted as "well detected".
  Method: print_performance(self, method='pooled_with_average')
    Docstring:
      Print performance with the selected method
      
      Parameters
      ----------
      method : "by_unit" | "pooled_with_average", default: "pooled_with_average"
          The method to compute performance
  Method: print_summary(self, well_detected_score=None, redundant_score=None, overmerged_score=None)
    Docstring:
      Print a global performance summary that depends on the context:
        * exhaustive_gt = True/False
        * how many GT units (one or several)
      
      This summary mixes several performance metrics.
      
      Parameters
      ----------
      well_detected_score : float, default: None
          The agreement score above which tested units
          are counted as "well detected".
      redundant_score : float, default: None
          The agreement score below which tested units
          are counted as "false positive" (and not "redundant").
      overmerged_score : float, default: None
          Tested units with 2 or more agreement scores above "overmerged_score"
          are counted as "overmerged".

Function: compare_spike_trains(spiketrain1, spiketrain2, delta_frames=10)
  Docstring:
    Compares 2 spike trains.
    
    Note:
      * The first spiketrain is assumed to be the ground truth.
      * This implementation does not count a TP when more than one spike
        is present around the same time in spiketrain2.
    
    Parameters
    ----------
    spiketrain1, spiketrain2 : numpy.array
        Times of spikes for the 2 spike trains.
    
    Returns
    -------
    lab_st1, lab_st2 : numpy.array
        Label of score for each spike

Class: compare_templates
  Docstring:
    Compares units from different sessions based on template similarity
    
    Parameters
    ----------
    sorting_analyzer_1 : SortingAnalyzer
        The first SortingAnalyzer to get templates to compare.
    sorting_analyzer_2 : SortingAnalyzer
        The second SortingAnalyzer to get templates to compare.
    unit_ids1 : list, default: None
        List of units from sorting_analyzer_1 to compare.
    unit_ids2 : list, default: None
        List of units from sorting_analyzer_2 to compare.
    name1 : str, default: "sess1"
        Name of first session.
    name2 : str, default: "sess2"
        Name of second session.
    similarity_method : "cosine" | "l1" | "l2", default: "cosine"
        Method for the similarity matrix.
    support : "dense" | "union" | "intersection", default: "union"
        The support to compute the similarity matrix.
    num_shifts : int, default: 0
        Number of shifts to use to shift templates to maximize similarity.
    verbose : bool, default: False
        If True, output is verbose.
    chance_score : float, default: 0.3
        Minimum agreement score for a possible match
    match_score : float, default: 0.7
        Minimum agreement score to match units
    
    
    Returns
    -------
    comparison : TemplateComparison
        The output TemplateComparison object.
  __init__(self, sorting_analyzer_1, sorting_analyzer_2, name1=None, name2=None, unit_ids1=None, unit_ids2=None, match_score=0.7, chance_score=0.3, similarity_method='cosine', support='union', num_shifts=0, verbose=False)

Class: compare_two_sorters
  Docstring:
    Compares two spike sorter outputs.
    
    - Spike trains are matched based on their agreement scores
    - Individual spikes are labelled as true positives (TP), false negatives (FN), false positives 1 (FP from spike
      train 1), false positives 2 (FP from spike train 2), misclassifications (CL)
    
    It also allows one to get the confusion matrix, agreement fraction, false positive fraction, and
    false negative fraction.
    
    Parameters
    ----------
    sorting1 : BaseSorting
        The first sorting for the comparison
    sorting2 : BaseSorting
        The second sorting for the comparison
    sorting1_name : str, default: None
        The name of sorter 1
    sorting2_name : str, default: None
        The name of sorter 2
    delta_time : float, default: 0.4
        Number of ms to consider coincident spikes
    match_score : float, default: 0.5
        Minimum agreement score to match units
    chance_score : float, default: 0.1
        Minimum agreement score for a possible match
    agreement_method : "count" | "distance", default: "count"
        The method to compute agreement scores. The "count" method computes agreement scores from spike counts.
        The "distance" method computes agreement scores from spike time distance functions.
    verbose : bool, default: False
        If True, output is verbose
    
    Returns
    -------
    sorting_comparison : SortingComparison
        The SortingComparison object
  __init__(self, sorting1: 'BaseSorting', sorting2: 'BaseSorting', sorting1_name: 'str | None' = None, sorting2_name: 'str | None' = None, delta_time: 'float' = 0.4, match_score: 'float' = 0.5, chance_score: 'float' = 0.1, agreement_method: 'str' = 'count', verbose: 'bool' = False)
  Method: get_agreement_fraction(self, unit1=None, unit2=None)
    Docstring:
      None
  Method: get_best_unit_match1(self, unit1)
    Docstring:
      None
  Method: get_best_unit_match2(self, unit2)
    Docstring:
      None
  Method: get_matching(self)
    Docstring:
      None
  Method: get_matching_event_count(self, unit1, unit2)
    Docstring:
      None
  Method: get_matching_unit_list1(self, unit1)
    Docstring:
      None
  Method: get_matching_unit_list2(self, unit2)
    Docstring:
      None

Function: compute_agreement_score(num_matches: 'int', num1: 'int', num2: 'int') -> 'float'
  Docstring:
    Computes agreement score.
    
    Parameters
    ----------
    num_matches : int
        Number of matches
    num1 : int
        Number of events in spike train 1
    num2 : int
        Number of events in spike train 2
    
    Returns
    -------
    score : float
        Agreement score
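
A minimal sketch of this score, assuming the usual matches-over-union definition (the docstring does not spell out the formula, so treat the exact expression as an assumption):

```python
def agreement_score(num_matches: int, num1: int, num2: int) -> float:
    # Matches divided by the union of events from both spike trains
    # (assumed definition; the library may differ in edge-case handling).
    denom = num1 + num2 - num_matches
    return num_matches / denom if denom > 0 else 0.0
```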

Function: compute_performance(count_score)
  Docstring:
    Computes the performance formulas.
    The trick here is that it works both on pd.Series and pd.DataFrame,
    line by line.
    It is used internally by the per-spiketrain and pooled performance computations.
    
    https://en.wikipedia.org/wiki/Sensitivity_and_specificity
    
    Note :
      * we don't have TN because it does not make sense here.
      * "accuracy" = "tp_rate" because TN=0
      * "recall" = "sensitivity"

Function: count_match_spikes(times1, all_times2, delta_frames)
  Docstring:
    Computes matching spikes between one spike train and a list of others.
    
    Parameters
    ----------
    times1 : array
        Spike train 1 frames
    all_times2 : list of array
        List of spike trains from sorting 2
    delta_frames : int
        Number of frames to consider spikes coincident
    
    Returns
    -------
    matching_events_count : list
        List of counts of matching events

Function: count_matching_events(times1, times2: 'np.ndarray | list', delta: 'int' = 10)
  Docstring:
    Counts matching events.
    
    Parameters
    ----------
    times1 : list
        List of spike train 1 frames
    times2 : list
        List of spike train 2 frames
    delta : int
        Number of frames for considering matching events
    
    Returns
    -------
    matching_count : int
        Number of matching events
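
Because both spike trains are given in frames and are typically sorted, the count can be sketched as a two-pointer sweep (an illustration only; the library's implementation may handle ties and double counting differently):

```python
import numpy as np

def count_matching(times1, times2, delta=10):
    # Count pairs with |t1 - t2| <= delta, consuming each spike at most once.
    times1, times2 = np.asarray(times1), np.asarray(times2)
    i = j = count = 0
    while i < len(times1) and j < len(times2):
        d = times2[j] - times1[i]
        if abs(d) <= delta:
            count += 1
            i += 1
            j += 1      # consume both spikes
        elif d < -delta:
            j += 1      # times2[j] is too early to ever match, advance
        else:
            i += 1      # times1[i] is too early, advance
    return count
```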

Class: create_hybrid_spikes_recording
  Docstring:
    Class for creating a hybrid recording where additional spikes are added
    to already existing units.
    
    Parameters
    ----------
    wvf_extractor : WaveformExtractor
        The waveform extractor object of the existing recording.
    injected_sorting : BaseSorting | None
        Additional spikes to inject.
        If None, will generate it.
    max_injected_per_unit : int
        If injected_sorting=None, the max number of spikes per unit
        that is allowed to be injected.
    unit_ids : list[int] | None
        unit_ids to take in the wvf_extractor for spikes injection.
    injected_rate : float
        If injected_sorting=None, the max fraction of spikes per
        unit that is allowed to be injected.
    refractory_period_ms : float
        If injected_sorting=None, the injected spikes need to respect
        this refractory period.
    injected_sorting_folder : str | Path | None
        If given, the injected sorting is saved to this folder.
        It must be specified if injected_sorting is None or not serializable to file.
    
    Returns
    -------
    hybrid_spikes_recording : HybridSpikesRecording
        The recording containing units with real and hybrid spikes.
  __init__(self, wvf_extractor, injected_sorting: 'Union[BaseSorting, None]' = None, unit_ids: 'Union[List[int], None]' = None, max_injected_per_unit: 'int' = 1000, injected_rate: 'float' = 0.05, refractory_period_ms: 'float' = 1.5, injected_sorting_folder: 'Union[str, Path, None]' = None) -> 'None'

Class: create_hybrid_units_recording
  Docstring:
    Class for creating a hybrid recording where additional units are added
    to an existing recording.
    
    Parameters
    ----------
    parent_recording : BaseRecording
        Existing recording to add on top of.
    templates : np.ndarray[n_units, n_samples, n_channels]
        Array containing the templates to inject for all the units.
    injected_sorting : BaseSorting | None
        The sorting for the injected units.
        If None, will be generated using the following parameters.
    nbefore : list[int] | int | None
        The sample index of the template center for each unit.
        If None, will default to the highest peak.
    firing_rate : float
        The firing rate of the injected units (in Hz).
    amplitude_factor : np.ndarray | None
        The amplitude factor for each spike.
        If None, will be generated as a gaussian centered at 1.0 and with an std of amplitude_std.
    amplitude_std : float
        The standard deviation of the amplitude (centered at 1.0).
    refractory_period_ms : float
        The refractory period of the injected spike train (in ms).
    injected_sorting_folder : str | Path | None
        If given, the injected sorting is saved to this folder.
        It must be specified if injected_sorting is None or not serializable to file.
    seed : int, default: None
        Random seed for amplitude_factor
    
    Returns
    -------
    hybrid_units_recording : HybridUnitsRecording
        The recording containing real and hybrid units.
  __init__(self, parent_recording: 'BaseRecording', templates: 'np.ndarray', injected_sorting: 'Union[BaseSorting, None]' = None, nbefore: 'Union[List[int], int, None]' = None, firing_rate: 'float' = 10, amplitude_factor: 'Union[np.ndarray, None]' = None, amplitude_std: 'float' = 0.0, refractory_period_ms: 'float' = 2.0, injected_sorting_folder: 'Union[str, Path, None]' = None, seed=None)

Function: do_confusion_matrix(event_counts1, event_counts2, match_12, match_event_count)
  Docstring:
    Computes the confusion matrix between one ground truth sorting
    and another sorting.
    
    Parameters
    ----------
    event_counts1 : pd.Series
        Number of events per unit in sorting 1
    event_counts2 : pd.Series
        Number of events per unit in sorting 2
    match_12 : pd.Series
        Series of matching from sorting1 to sorting2.
        Can be the hungarian or best match.
    match_event_count : pd.DataFrame
        The match count matrix given by make_match_count_matrix
    
    Returns
    -------
    confusion_matrix : pd.DataFrame
        The confusion matrix
        index is units1 reordered
        columns are units2 reordered

Function: do_count_event(sorting)
  Docstring:
    Counts events for each unit in a sorting.
    
    Kept for backward compatibility; sorting.count_num_spikes_per_unit() does the same.
    
    Parameters
    ----------
    sorting : BaseSorting
        A sorting extractor
    
    Returns
    -------
    event_count : pd.Series
        Number of spikes per unit.

Function: do_count_score(event_counts1, event_counts2, match_12, match_event_count)
  Docstring:
    For each ground truth unit, count how many:
    "tp", "fn", "cl", "fp", "num_gt", "num_tested", "tested_id"
    
    Parameters
    ----------
    event_counts1 : pd.Series
        Number of events per unit in sorting 1
    event_counts2 : pd.Series
        Number of events per unit in sorting 2
    match_12 : pd.Series
        Series of matching from sorting1 to sorting2.
        Can be the hungarian or best match.
    match_event_count : pd.DataFrame
        The match count matrix given by make_match_count_matrix
    
    Returns
    -------
    count_score : pd.DataFrame
        A table with one line per GT unit and columns
        tp/fn/fp/...

Function: do_score_labels(sorting1, sorting2, delta_frames, unit_map12, label_misclassification=False)
  Docstring:
    Makes the labelling at spike level for each spike train:
      * TP: true positive
      * CL: classification error
      * FN: False negative
      * FP: False positive
      * TOT:
      * TOT_ST1:
      * TOT_ST2:
    
    Parameters
    ----------
    sorting1 : BaseSorting
        The ground truth sorting
    sorting2 : BaseSorting
        The tested sorting
    delta_frames : int
        Number of frames to consider spikes coincident
    unit_map12 : pd.Series
        Series of matching from sorting1 to sorting2
    label_misclassification : bool
        If True, misclassification errors are labelled
    
    Returns
    -------
    labels_st1 : dict of lists of np.array of str
        Contain score labels for units of sorting 1 for each segment
    labels_st2 : dict of lists of np.array of str
        Contain score labels for units of sorting 2 for each segment

Function: make_agreement_scores(sorting1: 'BaseSorting', sorting2: 'BaseSorting', delta_frames: 'int', ensure_symmetry: 'bool' = True)
  Docstring:
    Make the agreement matrix.
    No threshold (min_score) is applied at this step.
    
    Note : this computation is symmetric by default.
    Inverting sorting1 and sorting2 gives the transposed matrix.
    
    Parameters
    ----------
    sorting1 : BaseSorting
        The first sorting extractor
    sorting2 : BaseSorting
        The second sorting extractor
    delta_frames : int
        Number of frames to consider spikes coincident
    ensure_symmetry : bool, default: True
        If ensure_symmetry is True, then the algo is run two times by switching sorting1 and sorting2.
        And the minimum of the two results is taken.
    Returns
    -------
    agreement_scores : pd.DataFrame
        The agreement score matrix.

Function: make_best_match(agreement_scores, min_score) -> "'tuple[pd.Series, pd.Series]'"
  Docstring:
    Given an agreement matrix and a min_score threshold,
    return the best match for each unit, independently of the others.
    
    Note : this is symmetric.
    
    Parameters
    ----------
    agreement_scores : pd.DataFrame
    
    min_score : float
    
    
    Returns
    -------
    best_match_12 : pd.Series
    
    best_match_21 : pd.Series
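
A sketch of this symmetric best-match logic with pandas (assuming `-1` is used as the "no match" sentinel, which is an assumption rather than necessarily the library's choice):

```python
import pandas as pd

def best_match(agreement_scores: pd.DataFrame, min_score: float):
    # Best partner per unit, computed independently in each direction;
    # matches whose best score is below min_score are marked -1.
    best_12 = agreement_scores.idxmax(axis=1).astype(object)  # units1 -> units2
    best_12[agreement_scores.max(axis=1) < min_score] = -1
    best_21 = agreement_scores.idxmax(axis=0).astype(object)  # units2 -> units1
    best_21[agreement_scores.max(axis=0) < min_score] = -1
    return best_12, best_21
```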

Function: make_hungarian_match(agreement_scores, min_score)
  Docstring:
    Given an agreement matrix and a min_score threshold,
    return the "optimal" match with the "hungarian" algo.
    This internally uses the scipy.optimize.linear_sum_assignment implementation.
    
    Parameters
    ----------
    agreement_scores : pd.DataFrame
    
    min_score : float
    
    
    Returns
    -------
    hungarian_match_12 : pd.Series
    
    hungarian_match_21 : pd.Series
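
Since the docstring names scipy.optimize.linear_sum_assignment, the core of the matching can be sketched as follows (hypothetical helper; the `-1` sentinel and the post-filtering on min_score are assumptions):

```python
import pandas as pd
from scipy.optimize import linear_sum_assignment

def hungarian_match(agreement_scores: pd.DataFrame, min_score: float):
    # Maximize total agreement by negating scores for the cost-minimizing solver.
    rows, cols = linear_sum_assignment(-agreement_scores.values)
    match_12 = pd.Series(-1, index=agreement_scores.index, dtype=object)
    match_21 = pd.Series(-1, index=agreement_scores.columns, dtype=object)
    for r, c in zip(rows, cols):
        if agreement_scores.iat[r, c] >= min_score:  # drop weak assignments
            u1 = agreement_scores.index[r]
            u2 = agreement_scores.columns[c]
            match_12[u1] = u2
            match_21[u2] = u1
    return match_12, match_21
```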

Function: make_match_count_matrix(sorting1: 'BaseSorting', sorting2: 'BaseSorting', delta_frames: 'int', ensure_symmetry: 'bool' = False)
  Docstring:
    Computes a matrix representing the matches between two Sorting objects.
    
    Given two spike trains, this function finds matching spikes based on a temporal proximity criterion
    defined by `delta_frames`. The resulting matrix indicates the number of matches between units
    in `spike_frames_train1` and `spike_frames_train2` for each pair of units.
    
    Note that this algo is not symmetric and is biased with `sorting1` representing ground truth for the comparison
    
    Parameters
    ----------
    sorting1 : Sorting
        The first Sorting object, conventionally the ground truth.
    sorting2 : Sorting
        The second Sorting object, conventionally the tested sorting.
    delta_frames : int
        The inclusive upper limit on the frame difference for which two spikes are considered matching. That is
        if `abs(spike_frames_train1[i] - spike_frames_train2[j]) <= delta_frames` then the spikes at
        `spike_frames_train1[i]` and `spike_frames_train2[j]` are considered matching.
    ensure_symmetry : bool, default: False
        If ensure_symmetry=True, then the algo is run two times by switching sorting1 and sorting2.
        And the minimum of the two results is taken.
    Returns
    -------
    matching_matrix : pd.DataFrame
        A 2D pandas DataFrame of shape `(num_units_train1, num_units_train2)`. Each element `[i, j]` represents
        the count of matching spike pairs between unit `i` from `spike_frames_train1` and unit `j` from `spike_frames_train2`.
    
    Notes
    -----
    This algorithm identifies matching spikes between two ordered spike trains.
    By iterating through each spike in the first train, it compares them against spikes in the second train,
    determining matches based on the two spikes frames being within `delta_frames` of each other.
    
    To avoid redundant comparisons the algorithm maintains a reference, `second_train_search_start`,
    which signifies the minimal index in the second spike train that might match the upcoming spike
    in the first train.
    
    The logic can be summarized as follows:
    1. Iterate through each spike in the first train.
    2. For each spike, find the first match in the second train.
    3. Save the index of the first match as the new `second_train_search_start`.
    4. From the first match onwards, count as many matches as possible.
    
    An important condition here is that the same spike is not matched twice. This is managed by keeping track
    of the last matched frame for each unit pair in `last_match_frame1` and `last_match_frame2`.
    There are corner cases where a spike in spiketrain 2 can be counted twice if there are bouts of bursting activity
    (below delta_frames) in spiketrain 1. To ensure that the number of matches does not exceed the number of spikes,
    we apply a final clip.
    
    
    For more details on the rationale behind this approach, refer to the documentation of this module and/or
    the metrics section in SpikeForest documentation.
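
The search-start bookkeeping described above can be sketched for a single unit pair (a simplified illustration; the real implementation vectorizes over all unit pairs and tracks last-matched frames explicitly):

```python
def match_count(train1, train2, delta_frames):
    # Count matches between two sorted spike trains, advancing a
    # persistent search start so spikes in train2 that can no longer
    # match are never re-scanned.
    count = 0
    search_start = 0
    for t1 in train1:
        j = search_start
        # skip spikes in train2 that are too early to match t1 or any later spike
        while j < len(train2) and train2[j] < t1 - delta_frames:
            j += 1
        search_start = j
        # count every spike within the window around t1
        while j < len(train2) and train2[j] <= t1 + delta_frames:
            count += 1
            j += 1
    # final clip: matches never exceed the number of spikes in either train
    return min(count, len(train1), len(train2))
```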

Function: make_possible_match(agreement_scores, min_score)
  Docstring:
    Given an agreement matrix and a min_score threshold,
    return as a dict all possible matches for each spiketrain on each side.
    
    Note : this is symmetric.
    
    Parameters
    ----------
    agreement_scores : pd.DataFrame
    
    min_score : float
    
    
    Returns
    -------
    best_match_12 : dict[NDArray]
    
    best_match_21 : dict[NDArray]

==== DELIM ====
API for module: spikeinterface.curation

Class: CurationSorting
  Docstring:
    Class that handles curation of a Sorting object.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting object
    properties_policy : "keep" | "remove", default: "keep"
        Policy used to propagate properties after split and merge operation. If "keep" the properties will be
        passed to the new units (if the original units have the same value). If "remove" the new units will have
        an empty value for all the properties
    make_graph : bool
        True to keep a Networkx graph instance with the curation history
    
    Returns
    -------
    sorting : Sorting
        Sorting object with the selected units merged
  __init__(self, sorting, make_graph=False, properties_policy='keep')
  Method: draw_graph(self, **kwargs)
    Docstring:
      Draw the curation graph.
      
      Parameters
      ----------
      **kwargs : dict
          Keyword arguments for Networkx draw function
  Method: merge(self, units_to_merge, new_unit_id=None, delta_time_ms=0.4)
    Docstring:
      Merge a list of units into a new unit.
      
      Parameters
      ----------
      units_to_merge : list[str|int]
          List of unit ids to merge
      new_unit_id : int or str
          The new unit id. If None, a new unit id is automatically selected
      delta_time_ms : float
          Number of ms to consider for duplicated spikes. None won't check for duplications
  Method: redo(self)
    Docstring:
      Redo the last operation.
  Method: redo_available(self)
    Docstring:
      Check if redo is available.
      
      Returns
      -------
      bool
          True if redo is available
  Method: remove_empty_units(self)
    Docstring:
      Remove empty units.
  Method: remove_unit(self, unit_id)
    Docstring:
      Remove a unit.
      
      Parameters
      ----------
      unit_id : int or str
          The unit id to remove
  Method: remove_units(self, unit_ids)
    Docstring:
      Remove a list of units.
      
      Parameters
      ----------
      unit_ids : list[str|int]
          List of unit ids to remove
  Method: rename(self, renamed_unit_ids)
    Docstring:
      Rename a list of units.
      
      Parameters
      ----------
      renamed_unit_ids : list[str|int]
          List of unit ids to rename existing units
  Method: select_units(self, unit_ids, renamed_unit_ids=None)
    Docstring:
      Select a list of units.
      
      Parameters
      ----------
      unit_ids : list[str|int]
          List of unit ids to select
      renamed_unit_ids : list or None, default: None
          List of new unit ids to rename the selected units
  Method: split(self, split_unit_id, indices_list, new_unit_ids=None)
    Docstring:
      Split a unit into multiple units.
      
      Parameters
      ----------
      split_unit_id : int or str
          The unit to split
      indices_list : list or np.array
          A list of index arrays selecting the spikes to split in each segment.
          Each array can contain more than 2 indices (e.g. for splitting in 3 or more units) and it should
          be the same length as the spike train (for each segment).
          If the sorting has only one segment, indices_list can be a single array
      new_unit_ids : list[str|int] or None
          List of new unit ids. If None, a new unit id is automatically selected
  Method: undo(self)
    Docstring:
      Undo the last operation.
  Method: undo_available(self)
    Docstring:
      Check if undo is available.
      
      Returns
      -------
      bool
          True if undo is available

Class: MergeUnitsSorting
  Docstring:
    Class that handles several merges of units from a Sorting object based on a list of lists of unit_ids.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting object
    units_to_merge : list/tuple of lists/tuples
        A list of lists for every merge group. Each element needs to have at least two elements (two units to merge),
        but it can also have more (merge multiple units at once).
    new_unit_ids : None or list
        A new unit_ids for merged units. If given, it needs to have the same length as `units_to_merge`
    properties_policy : "keep" | "remove", default: "keep"
        Policy used to propagate properties. If "keep" the properties will be passed to the new units
        (if the units_to_merge have the same value). If "remove" the new units will have an empty
        value for all the properties of the new unit.
    delta_time_ms : float or None
        Number of ms to consider for duplicated spikes. None won't check for duplications
    
    Returns
    -------
    sorting : Sorting
        Sorting object with the selected units merged
  __init__(self, sorting, units_to_merge, new_unit_ids=None, properties_policy='keep', delta_time_ms=0.4)

Class: SplitUnitSorting
  Docstring:
    Class that handles splitting of a unit. It creates a new Sorting object linked to parent_sorting.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting object
    split_unit_id : int
        Unit id of the unit to split
    indices_list : list or np.array
        A list of index arrays selecting the spikes to split in each segment.
        Each array can contain more than 2 indices (e.g. for splitting in 3 or more units) and it should
        be the same length as the spike train (for each segment).
        If the sorting has only one segment, indices_list can be a single array
    new_unit_ids : list | None
        Unit ids of the new units to be created
    properties_policy : "keep" | "remove", default: "keep"
        Policy used to propagate properties. If "keep" the properties will be passed to the new units
        (if the original unit has the same value). If "remove" the new units will have an empty
        value for all the properties of the new unit
    
    Returns
    -------
    sorting : Sorting
        Sorting object with the selected units split
  __init__(self, sorting, split_unit_id, indices_list, new_unit_ids=None, properties_policy='keep')

Function: apply_curation(sorting_or_analyzer, curation_dict, censor_ms=None, new_id_strategy='append', merging_mode='soft', sparsity_overlap=0.75, verbose=False, **job_kwargs)
  Docstring:
    Apply curation dict to a Sorting or a SortingAnalyzer.
    
    Steps are done in this order:
      1. Apply removal using curation_dict["removed_units"]
      2. Apply merges using curation_dict["merge_unit_groups"]
      3. Set labels using curation_dict["manual_labels"]
    
    A new Sorting or SortingAnalyzer (in memory) is returned.
    The user (an adult) has the responsibility to save it somewhere (or not).
    
    Parameters
    ----------
    sorting_or_analyzer : Sorting | SortingAnalyzer
        The Sorting object to apply merges.
    curation_dict : dict
        The curation dict.
    censor_ms : float | None, default: None
        When applying the merges, any consecutive spikes within the `censor_ms` are removed. This can be thought of
        as the desired refractory period. If `censor_ms=None`, no spikes are discarded.
    new_id_strategy : "append" | "take_first", default: "append"
        The strategy that should be used, if `new_unit_ids` is None, to create new unit_ids.
    
            * "append" : new_units_ids will be added at the end of max(sorting.unit_ids)
            * "take_first" : new_unit_ids will be the first unit_id of every list of merges
    merging_mode : "soft" | "hard", default: "soft"
        How merges are performed for a SortingAnalyzer. If `merging_mode` is "soft", merges are approximated,
        without reloading the waveforms. If `merging_mode` is "hard", computations are accurately
        re-performed, reloading waveforms if needed
    sparsity_overlap : float, default: 0.75
        The fraction of overlap that units should share in order to accept merges. If this criterion is not
        met, soft merging will not be possible and an error will be raised. This is for use with a SortingAnalyzer input.
    verbose : bool, default: False
        If True, output is verbose
    **job_kwargs : dict
        Job keyword arguments for `merge_units`
    
    Returns
    -------
    sorting_or_analyzer : Sorting | SortingAnalyzer
        The curated object.
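
Given the three steps above, a curation dict can be pictured like this (a hypothetical minimal example showing only the keys named in the steps; the actual format may carry additional fields such as a format version):

```python
# Hypothetical minimal curation dict; only the keys listed in the
# apply_curation steps are shown, and the label structure is assumed.
curation_dict = {
    "removed_units": ["unit9"],                  # step 1: units dropped entirely
    "merge_unit_groups": [["unit1", "unit2"]],   # step 2: groups merged together
    "manual_labels": [                           # step 3: labels to attach
        {"unit_id": "unit3", "quality": ["good"]},
    ],
}
```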

Function: apply_sortingview_curation(sorting_or_analyzer, uri_or_json, exclude_labels=None, include_labels=None, skip_merge=False, verbose=None)
  Docstring:
    Apply curation from SortingView manual legacy curation format (before the official "curation_format")
    
    First, merges (if present) are applied. Then labels are loaded and units
    are optionally filtered based on exclude_labels and include_labels.
    
    Parameters
    ----------
    sorting_or_analyzer : Sorting | SortingAnalyzer
        The sorting or analyzer to be curated
    uri_or_json : str or Path
        The URI curation link from SortingView or the path to the curation json file
    exclude_labels : list, default: None
        Optional list of labels to exclude (e.g. ["reject", "noise"]).
        Mutually exclusive with include_labels
    include_labels : list, default: None
        Optional list of labels to include (e.g. ["accept"]).
        Mutually exclusive with exclude_labels
    skip_merge : bool, default: False
        If True, merges are not applied (only labels)
    verbose : None
        Deprecated
    
    
    Returns
    -------
    sorting_or_analyzer_curated : BaseSorting
        The curated sorting or analyzer

Function: auto_label_units(sorting_analyzer: spikeinterface.core.sortinganalyzer.SortingAnalyzer, model_folder=None, model_name=None, repo_id=None, label_conversion=None, trust_model=False, trusted=None, export_to_phy=False, enforce_metric_params=False)
  Docstring:
    Automatically labels units based on a model-based classification, either from a model
    hosted on HuggingFaceHub or one available in a local folder.
    
    This function returns the predicted labels and the prediction probabilities, and populates
    the sorting object with the predicted labels and probabilities in the 'classifier_label' and
    'classifier_probability' properties.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        The sorting analyzer object containing the spike sorting results.
    model_folder : str or Path, default: None
        The path to the folder containing the model
    repo_id : str | Path, default: None
        Hugging face repo id which contains the model e.g. 'username/model'
    model_name: str | Path, default: None
        Filename of model e.g. 'my_model.skops'. If None, uses first model found.
    label_conversion : dict | None, default: None
        A dictionary for converting the predicted labels (which are integers) to custom labels. If None,
        tries to extract from `model_info.json` file. The dictionary should have the format {old_label: new_label}.
    export_to_phy : bool, default: False
        Whether to export the results to Phy format.
    trust_model : bool, default: False
        Whether to trust the model. If True, the `trusted` parameter that is passed to `skops.load` to load the model will be
        automatically inferred. If False, the `trusted` parameter must be provided to indicate the trusted objects.
    trusted : list of str, default: None
        Passed to skops.load. The object will be loaded only if there are only trusted objects and objects of types listed in trusted in the dumped file.
    enforce_metric_params : bool, default: False
        If True and the parameters used to compute the metrics in `sorting_analyzer` differ from the parameters
        used to compute the metrics used to train the model, this function raises an error. Otherwise, a warning is raised.
    
    
    Returns
    -------
    classified_units : pd.DataFrame
        A dataframe containing the classified units, indexed by the `unit_ids`, containing the predicted label
        and confidence probability of each labelled unit.
    
    Raises
    ------
    ValueError
        If the pipeline is not an instance of sklearn.pipeline.Pipeline.

Function: auto_merge_units(sorting_analyzer: 'SortingAnalyzer', presets: 'list | None' = ['similarity_correlograms'], steps_params: 'dict' = None, steps: 'list[str] | None' = None, recursive: 'bool' = False, censor_ms=None, sparsity_overlap=0.75, merging_mode='soft', new_id_strategy='append', raise_error: 'bool' = False, extra_outputs: 'bool' = False, force_copy: 'bool' = True, **job_kwargs) -> 'SortingAnalyzer'
  Docstring:
    Automatically find and apply merges.
    This function can launch several merging presets in sequence and apply each
    step recursively.
    Merges are applied one preset at a time, either sequentially or repeated until no more merges are found,
    and extensions are not recomputed from scratch when units are merged. Internally, the function calls
    _auto_merge_units_single_iteration() for every preset and/or combination of steps.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        The SortingAnalyzer
    presets : str or list, default: "similarity_correlograms"
        A single preset or a list of presets to apply iteratively to the data
    steps_params : dict or list of dict, default: None
        The parameters to use for the steps or presets. Should be a single dict if there is only one step,
        or a list of dicts if there are multiple steps (same size as presets)
    steps : list or list of list, default: None
        The list of steps that should be applied. If a list of lists is provided, these lists are applied
        iteratively. Mutually exclusive with presets
    recursive : bool, default: False
        If True, each preset in the list is applied until no further merges can be done, before trying
        the next one
    censor_ms : None or float, default: None
        When merging units, any spikes violating this refractory period will be discarded.
    merging_mode : "soft" | "hard", default: "soft"
        How merges are performed. In "soft" mode, merged extension data is approximated rather than
        recomputed.
    sparsity_overlap : float, default: 0.75
        The fraction of sparsity overlap that units must share for a merge to be accepted. If this criterion is
        not met, soft merging will not be performed.
    new_id_strategy : "append" | "take_first", default: "append"
        The strategy that should be used, if `new_unit_ids` is None, to create new unit_ids.

            * "append" : new unit ids are appended after max(sorting.unit_ids)
            * "take_first" : the new unit id is the first unit id of each merge group
    raise_error : bool, default: False
        If True, an error is raised if the merges cannot be done. Otherwise, warnings are displayed
    extra_outputs : bool, default: False
        If True, the list of merges applied at every preset and a dictionary (`outs`) with processed data are also returned.
    force_copy : bool, default: True
        When new extensions are computed, a copy of the analyzer is made by default to avoid overwriting
        already computed extensions. Set to False to overwrite them.
    
    IMPORTANT: internally, all computations rely on extensions of the analyzer (e.g. correlograms,
    template_similarity, ...), which are computed with default parameters if not present. For finer
    control over these values, precompute the extensions before applying the auto merge.
    
    Errors about sparsity_overlap mean you are trying to perform soft merges of units that barely
    overlap. In theory this should not happen; if it does, either the attempted merges are too
    aggressive (check the parameters), or you should switch to hard merges.
    
    Returns
    -------
    sorting_analyzer:
        The new sorting analyzer where all the merges from all the presets have been applied
    merges, outs:
        Returned only when extra_outputs=True
        A list with all the merges performed at every step, and dictionaries that contain data for debugging and plotting.
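
A sketched call applying two presets in sequence; the call itself is commented out because it needs a real SortingAnalyzer, and the parameter values are illustrative, not recommendations.

```python
# Two presets applied one after the other; with recursive=True each preset
# repeats until it finds no further merges before the next one is tried
presets = ["x_contaminations", "temporal_splits"]

# Sketched call (assumed usage):
# from spikeinterface.curation import auto_merge_units
# merged_analyzer = auto_merge_units(
#     sorting_analyzer,
#     presets=presets,
#     recursive=True,
#     censor_ms=0.3,        # drop refractory-period violations created by merges
#     merging_mode="soft",
# )
```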

Function: compute_merge_unit_groups(sorting_analyzer: 'SortingAnalyzer', preset: 'str | None' = 'similarity_correlograms', resolve_graph: 'bool' = True, steps_params: 'dict' = None, compute_needed_extensions: 'bool' = True, extra_outputs: 'bool' = False, steps: 'list[str] | None' = None, force_copy: 'bool' = True, **job_kwargs) -> 'list[tuple[int | str, int | str]] | Tuple[list[tuple[int | str, int | str]], dict]'
  Docstring:
    Algorithm to find and check potential merges between units.
    
    The merges are proposed based on a series of steps with different criteria:
    
        * "num_spikes": enough spikes are found in each unit for computing the correlogram (`min_spikes`)
        * "snr": the SNR of the units is above a threshold (`min_snr`)
        * "remove_contaminated": each unit is not contaminated (by checking auto-correlogram - `contamination_thresh`)
        * "unit_locations": estimated unit locations are close enough (`max_distance_um`)
        * "correlogram": the cross-correlograms of the two units are similar to each auto-correlogram (`corr_diff_thresh`)
        * "template_similarity": the templates of the two units are similar (`template_diff_thresh`)
        * "presence_distance": the presence of the units is complementary in time (`presence_distance_thresh`)
        * "cross_contamination": the cross-contamination is not significant (`cc_thresh` and `p_value`)
        * "knn": the two units are close in the feature space
        * "quality_score": the unit "quality score" is increased after the merge
    
    The "quality score" factors in the increase in firing rate (**f**) due to the merge and a possible increase in
    contamination (**C**), weighted by a factor **k** (`firing_contamination_balance`).
    
    .. math::
    
        Q = f(1 - (k + 1)C)
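
The formula can be made concrete with a small numeric sketch. The function below simply evaluates Q; how the step compares merged vs. unmerged scores is our assumption for illustration, not taken from the source.

```python
# Q = f * (1 - (k + 1) * C): f = firing rate, C = contamination,
# k = firing_contamination_balance (default 1.5 in this module)
def quality_score(firing_rate, contamination, k=1.5):
    return firing_rate * (1.0 - (k + 1.0) * contamination)

# Hypothetical pair: the merged unit fires at 10 Hz with 5% contamination
q_merged = quality_score(10.0, 0.05)   # 10 * (1 - 2.5 * 0.05) = 8.75
q_a = quality_score(6.0, 0.02)         # 6 * (1 - 2.5 * 0.02) = 5.7
q_b = quality_score(4.0, 0.02)         # 4 * (1 - 2.5 * 0.02) = 3.8
# Here q_merged exceeds both unmerged scores, so the merge is plausibly kept.
```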
    
    IMPORTANT: internally, all computations rely on extensions of the analyzer (e.g. correlograms,
    template_similarity, ...), which are computed with default parameters if not present. For finer
    control over these values, precompute the extensions before applying the auto merge.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        The SortingAnalyzer
    preset : "similarity_correlograms" | "x_contaminations" | "temporal_splits" | "feature_neighbors" | None, default: "similarity_correlograms"
        The preset to use for the auto-merge. Presets combine different steps into a recipe and focus on:
    
        * | "similarity_correlograms": mainly focused on template similarity and correlograms.
          | It uses the following steps: "num_spikes", "remove_contaminated", "unit_locations",
          | "template_similarity", "correlogram", "quality_score"
        * | "x_contaminations": similar to "similarity_correlograms", but checks for cross-contamination instead of correlograms.
          | It uses the following steps: "num_spikes", "remove_contaminated", "unit_locations",
          | "template_similarity", "cross_contamination", "quality_score"
        * | "temporal_splits": focused on finding temporal splits using presence distance.
          | It uses the following steps: "num_spikes", "remove_contaminated", "unit_locations",
          | "template_similarity", "presence_distance", "quality_score"
        * | "feature_neighbors": focused on finding unit pairs whose spikes are close in the feature space using kNN.
          | It uses the following steps: "num_spikes", "snr", "remove_contaminated", "unit_locations",
          | "knn", "quality_score"
        If `preset` is None, you can specify the steps manually with the `steps` parameter.
    resolve_graph : bool, default: True
        If True, the function resolves the potential unit pairs to be merged into multiple-unit merges.
    compute_needed_extensions : bool, default : True
        Should we force the computation of needed extensions, if not already computed?
    extra_outputs : bool, default: False
        If True, an additional dictionary (`outs`) with processed data is returned.
    steps : None or list of str, default: None
        Which steps to run, if no preset is used.
        Potential steps : "num_spikes", "snr", "remove_contaminated", "unit_locations", "correlogram",
        "template_similarity", "presence_distance", "cross_contamination", "knn", "quality_score"
        Please check steps explanations above!
    steps_params : dict
        A dictionary whose keys are the steps and values are the step parameters.
    force_copy : bool, default: True
        When new extensions are computed, a copy of the analyzer is made by default to avoid overwriting
        already computed extensions. Set to False to overwrite them.
    
    Returns
    -------
    merge_unit_groups:
        List of groups of unit ids to be merged.
        When `resolve_graph` is True (default), a list of tuples of 2+ elements.
        If `resolve_graph` is False, a list of 2-element tuples is returned instead.
    outs:
        Returned only when extra_outputs=True
        A dictionary that contains data for debugging and plotting.
    
    References
    ----------
    This function is inspired by and built upon similar functions from Lussac [Llobet]_,
    by Aurelien Wyngaard and Victor Llobet.
    https://github.com/BarbourLab/lussac/blob/v1.0.0/postprocessing/merge_units.py
    
    However, it has been greatly consolidated and refined depending on the presets.

Function: curation_label_to_dataframe(curation_dict)
  Docstring:
    Transform the curation dict into a pandas dataframe.
    For label categories with exclusive=True : a single column is created and the values are the unique labels.
    For label categories with exclusive=False : one column per possible label is created and the values are booleans.
    
    If exclusive=False and the same label appears several times then an error is raised.
    
    Parameters
    ----------
    curation_dict : dict
        A curation dictionary
    
    Returns
    -------
    labels : pd.DataFrame
        dataframe with labels.
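
The described transformation can be sketched with plain pandas; the label data below is invented and this is not the library implementation.

```python
import pandas as pd

# Invented manual labels: "quality" is exclusive (one label per unit),
# "tags" is non-exclusive (any subset of the possible labels)
manual_labels = [
    {"unit_id": 0, "quality": ["good"], "tags": ["mua", "artifact"]},
    {"unit_id": 1, "quality": ["noise"], "tags": []},
]

df = pd.DataFrame(index=[lbl["unit_id"] for lbl in manual_labels])
# exclusive=True -> a single column holding the unique label
df["quality"] = [lbl["quality"][0] for lbl in manual_labels]
# exclusive=False -> one boolean column per possible label
for tag in ["mua", "artifact"]:
    df[tag] = [tag in lbl["tags"] for lbl in manual_labels]
```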

Class: curation_sorting
  Docstring:
    Class that handles curation of a Sorting object.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting object
    properties_policy : "keep" | "remove", default: "keep"
        Policy used to propagate properties after split and merge operation. If "keep" the properties will be
        passed to the new units (if the original units have the same value). If "remove" the new units will have
        an empty value for all the properties
    make_graph : bool
        If True, keep a NetworkX graph instance with the curation history
    
    Returns
    -------
    sorting : Sorting
        Sorting object with the selected units merged
  __init__(self, sorting, make_graph=False, properties_policy='keep')
  Method: draw_graph(self, **kwargs)
    Docstring:
      Draw the curation graph.
      
      Parameters
      ----------
      **kwargs : dict
          Keyword arguments for Networkx draw function
  Method: merge(self, units_to_merge, new_unit_id=None, delta_time_ms=0.4)
    Docstring:
      Merge a list of units into a new unit.
      
      Parameters
      ----------
      units_to_merge : list[str|int]
          List of unit ids to merge
      new_unit_id : int or str
          The new unit id. If None, a new unit id is automatically selected
      delta_time_ms : float
          Window in ms used to detect duplicated spikes. If None, duplicates are not checked
  Method: redo(self)
    Docstring:
      Redo the last operation.
  Method: redo_available(self)
    Docstring:
      Check if redo is available.
      
      Returns
      -------
      bool
          True if redo is available
  Method: remove_empty_units(self)
    Docstring:
      Remove empty units.
  Method: remove_unit(self, unit_id)
    Docstring:
      Remove a unit.
      
      Parameters
      ----------
      unit_id : int or str
          The unit id to remove
  Method: remove_units(self, unit_ids)
    Docstring:
      Remove a list of units.
      
      Parameters
      ----------
      unit_ids : list[str|int]
          List of unit ids to remove
  Method: rename(self, renamed_unit_ids)
    Docstring:
      Rename a list of units.
      
      Parameters
      ----------
      renamed_unit_ids : list[str|int]
          List of new unit ids to rename the existing units
  Method: select_units(self, unit_ids, renamed_unit_ids=None)
    Docstring:
      Select a list of units.
      
      Parameters
      ----------
      unit_ids : list[str|int]
          List of unit ids to select
      renamed_unit_ids : list or None, default: None
          List of new unit ids to rename the selected units
  Method: split(self, split_unit_id, indices_list, new_unit_ids=None)
    Docstring:
      Split a unit into multiple units.
      
      Parameters
      ----------
      split_unit_id : int or str
          The unit to split
      indices_list : list or np.array
          A list of index arrays selecting the spikes to split in each segment.
          Each array can contain more than 2 indices (e.g. for splitting in 3 or more units) and it should
          be the same length as the spike train (for each segment).
          If the sorting has only one segment, indices_list can be a single array
      new_unit_ids : list[str|int] or None
          List of new unit ids. If None, a new unit id is automatically selected
  Method: undo(self)
    Docstring:
      Undo the last operation.
  Method: undo_available(self)
    Docstring:
      Check if undo is available.
      
      Returns
      -------
      bool
          True if undo is available
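
The undo/redo bookkeeping above can be pictured as two stacks. This is a conceptual sketch of that behaviour, not the class's actual implementation.

```python
# Conceptual sketch: operations go on an undo stack; undoing moves them to
# a redo stack; performing a new operation clears the redo history.
class UndoRedoSketch:
    def __init__(self):
        self._undo, self._redo = [], []

    def do(self, op):
        self._undo.append(op)
        self._redo.clear()  # a fresh operation invalidates redo

    def undo(self):
        self._redo.append(self._undo.pop())

    def redo(self):
        self._undo.append(self._redo.pop())

    def undo_available(self):
        return bool(self._undo)

    def redo_available(self):
        return bool(self._redo)

h = UndoRedoSketch()
h.do("merge([1, 2])")
h.do("remove_unit(5)")
h.undo()  # the removal is undone, so redo becomes available
```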

Function: find_duplicated_spikes(spike_train, censored_period: 'int', method: "'keep_first' | 'keep_last' | 'keep_first_iterative' | 'keep_last_iterative' | 'random'" = 'random', seed: 'Optional[int]' = None) -> 'np.ndarray'
  Docstring:
    Finds the indices where spikes should be considered duplicates.
    When two spikes are closer together than the censored period,
    one of them is taken out based on the method provided.
    
    Parameters
    ----------
    spike_train : np.ndarray
        The spike train on which to look for duplicated spikes.
    censored_period : int
        The censored period for duplicates (in sample time).
    method : "keep_first" | "keep_last" | "keep_first_iterative" | "keep_last_iterative" | "random", default: "random"
        Method used to remove the duplicated spikes.
    seed : int | None
        The seed to use if method="random".
    
    Returns
    -------
    indices_of_duplicates : np.ndarray
        The indices of spikes considered to be duplicates.
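
A numpy sketch of the "keep_first" rule (flag the later spike of every pair closer than the censored period); it mirrors the description, not the library implementation.

```python
import numpy as np

def duplicate_indices_keep_first(spike_train, censored_period):
    # Consecutive spikes closer than censored_period (in samples) violate;
    # "keep_first" flags the second spike of each violating pair.
    (close,) = np.nonzero(np.diff(spike_train) < censored_period)
    return close + 1  # indices of the later spike in each pair

train = np.array([100, 102, 200, 300, 301, 302])
dups = duplicate_indices_keep_first(train, censored_period=5)  # -> [1, 4, 5]
```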

Function: find_redundant_units(sorting, delta_time: 'float' = 0.4, agreement_threshold=0.2, duplicate_threshold=0.8)
  Docstring:
    Finds redundant or duplicate units by comparing the sorting output with itself.
    
    Parameters
    ----------
    sorting : BaseSorting
        The input sorting object
    delta_time : float, default: 0.4
        The time in ms to consider matching spikes
    agreement_threshold : float, default: 0.2
        Threshold on the agreement scores to flag possible redundant/duplicate units
    duplicate_threshold : float, default: 0.8
        Final threshold on the portion of coincident events over the number of spikes above which the
        unit is flagged as duplicate/redundant
    
    Returns
    -------
    list
        The list of duplicate units
    list of 2-element lists
        The list of duplicate pairs

Function: get_default_classifier_search_spaces()
  Docstring:
    None

Function: get_potential_auto_merge(sorting_analyzer: 'SortingAnalyzer', preset: 'str | None' = 'similarity_correlograms', resolve_graph: 'bool' = False, min_spikes: 'int' = 100, min_snr: 'float' = 2, max_distance_um: 'float' = 150.0, corr_diff_thresh: 'float' = 0.16, template_diff_thresh: 'float' = 0.25, contamination_thresh: 'float' = 0.2, presence_distance_thresh: 'float' = 100, p_value: 'float' = 0.2, cc_thresh: 'float' = 0.1, censored_period_ms: 'float' = 0.3, refractory_period_ms: 'float' = 1.0, sigma_smooth_ms: 'float' = 0.6, adaptative_window_thresh: 'float' = 0.5, censor_correlograms_ms: 'float' = 0.15, firing_contamination_balance: 'float' = 1.5, k_nn: 'int' = 10, knn_kwargs: 'dict | None' = None, presence_distance_kwargs: 'dict | None' = None, extra_outputs: 'bool' = False, steps: 'list[str] | None' = None) -> 'list[tuple[int | str, int | str]] | Tuple[tuple[int | str, int | str], dict]'
  Docstring:
    This function is deprecated. Use compute_merge_unit_groups() instead.
    This will be removed in 0.103.0
    
    Algorithm to find and check potential merges between units.
    
    The merges are proposed based on a series of steps with different criteria:
    
        * "num_spikes": enough spikes are found in each unit for computing the correlogram (`min_spikes`)
        * "snr": the SNR of the units is above a threshold (`min_snr`)
        * "remove_contaminated": each unit is not contaminated (by checking auto-correlogram - `contamination_thresh`)
        * "unit_locations": estimated unit locations are close enough (`max_distance_um`)
        * "correlogram": the cross-correlograms of the two units are similar to each auto-correlogram (`corr_diff_thresh`)
        * "template_similarity": the templates of the two units are similar (`template_diff_thresh`)
        * "presence_distance": the presence of the units is complementary in time (`presence_distance_thresh`)
        * "cross_contamination": the cross-contamination is not significant (`cc_thresh` and `p_value`)
        * "knn": the two units are close in the feature space
        * "quality_score": the unit "quality score" is increased after the merge
    
    The "quality score" factors in the increase in firing rate (**f**) due to the merge and a possible increase in
    contamination (**C**), weighted by a factor **k** (`firing_contamination_balance`).
    
    .. math::
    
        Q = f(1 - (k + 1)C)
    
    IMPORTANT: internally, all computations rely on extensions of the analyzer (e.g. correlograms,
    template_similarity, ...), which are computed with default parameters if not present. For finer
    control over these values, precompute the extensions before applying the auto merge.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        The SortingAnalyzer
    preset : "similarity_correlograms" | "x_contaminations" | "temporal_splits" | "feature_neighbors" | None, default: "similarity_correlograms"
        The preset to use for the auto-merge. Presets combine different steps into a recipe and focus on:
    
        * | "similarity_correlograms": mainly focused on template similarity and correlograms.
          | It uses the following steps: "num_spikes", "remove_contaminated", "unit_locations",
          | "template_similarity", "correlogram", "quality_score"
        * | "x_contaminations": similar to "similarity_correlograms", but checks for cross-contamination instead of correlograms.
          | It uses the following steps: "num_spikes", "remove_contaminated", "unit_locations",
          | "template_similarity", "cross_contamination", "quality_score"
        * | "temporal_splits": focused on finding temporal splits using presence distance.
          | It uses the following steps: "num_spikes", "remove_contaminated", "unit_locations",
          | "template_similarity", "presence_distance", "quality_score"
        * | "feature_neighbors": focused on finding unit pairs whose spikes are close in the feature space using kNN.
          | It uses the following steps: "num_spikes", "snr", "remove_contaminated", "unit_locations",
          | "knn", "quality_score"
    
        If `preset` is None, you can specify the steps manually with the `steps` parameter.
    resolve_graph : bool, default: False
        If True, the function resolves the potential unit pairs to be merged into multiple-unit merges.
    min_spikes : int, default: 100
        Minimum number of spikes for each unit to consider a potential merge.
        Enough spikes are needed to estimate the correlogram
    min_snr : float, default: 2
        Minimum Signal to Noise ratio for templates to be considered while merging
    max_distance_um : float, default: 150
        Maximum distance between units for considering a merge
    corr_diff_thresh : float, default: 0.16
        The threshold on the "correlogram distance metric" for considering a merge.
        It needs to be between 0 and 1
    template_diff_thresh : float, default: 0.25
        The threshold on the "template distance metric" for considering a merge.
        It needs to be between 0 and 1
    contamination_thresh : float, default: 0.2
        Threshold for not taking in account a unit when it is too contaminated.
    presence_distance_thresh : float, default: 100
        Parameter to control how present two units should be simultaneously.
    p_value : float, default: 0.2
        The p-value threshold for the cross-contamination test.
    cc_thresh : float, default: 0.1
        The threshold on the cross-contamination for considering a merge.
    censored_period_ms : float, default: 0.3
        Used to compute the refractory period violations aka "contamination".
    refractory_period_ms : float, default: 1
        Used to compute the refractory period violations aka "contamination".
    sigma_smooth_ms : float, default: 0.6
        Parameters to smooth the correlogram estimation.
    adaptative_window_thresh : float, default: 0.5
        Parameter to detect the window size in correlogram estimation.
    censor_correlograms_ms : float, default: 0.15
        The period to censor on the auto and cross-correlograms.
    firing_contamination_balance : float, default: 1.5
        Parameter to control the balance between firing rate and contamination in computing unit "quality score".
    k_nn : int, default: 10
        The number of neighbors to consider for every spike in the recording.
    knn_kwargs : dict, default: None
        The dict of extra params to be passed to knn.
    extra_outputs : bool, default: False
        If True, an additional dictionary (`outs`) with processed data is returned.
    steps : None or list of str, default: None
        Which steps to run, if no preset is used.
        Potential steps : "num_spikes", "snr", "remove_contaminated", "unit_locations", "correlogram",
        "template_similarity", "presence_distance", "cross_contamination", "knn", "quality_score"
        Please check steps explanations above!
    presence_distance_kwargs : dict | None, default: None
        A dictionary of kwargs to be passed to compute_presence_distance().
    
    Returns
    -------
    potential_merges:
        A list of tuples of 2 elements (if `resolve_graph` is False) or 2+ elements (if `resolve_graph` is True).
        List of pairs that could be merged.
    outs:
        Returned only when extra_outputs=True
        A dictionary that contains data for debugging and plotting.
    
    References
    ----------
    This function is inspired and built upon similar functions from Lussac [Llobet]_,
    done by Aurelien Wyngaard and Victor Llobet.
    https://github.com/BarbourLab/lussac/blob/v1.0.0/postprocessing/merge_units.py

Function: load_model(model_folder=None, repo_id=None, model_name=None, trust_model=False, trusted=None)
  Docstring:
    Loads a model and model_info from a HuggingFaceHub repo or a local folder.
    
    Parameters
    ----------
    model_folder : str or Path, default: None
        The path to the folder containing the model
    repo_id : str | Path, default: None
        Hugging face repo id which contains the model e.g. 'username/model'
    model_name: str | Path, default: None
        Filename of model e.g. 'my_model.skops'. If None, uses first model found.
    trust_model : bool, default: False
        Whether to trust the model. If True, the `trusted` parameter that is passed to `skops.load` to load the model will be
        automatically inferred. If False, the `trusted` parameter must be provided to indicate the trusted objects.
    trusted : list of str, default: None
        Passed to skops.load. The object will be loaded only if there are only trusted objects and objects of types listed in trusted in the dumped file.
    
    
    Returns
    -------
    model, model_info
        A model and metadata about the model

Class: merge_units_sorting
  Docstring:
    Class that handles several merges of units from a Sorting object based on a list of lists of unit_ids.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting object
    units_to_merge : list/tuple of lists/tuples
        A list of lists for every merge group. Each element needs to have at least two elements (two units to merge),
        but it can also have more (merge multiple units at once).
    new_unit_ids : None or list
        New unit ids for the merged units. If given, it must have the same length as `units_to_merge`
    properties_policy : "keep" | "remove", default: "keep"
        Policy used to propagate properties. If "keep" the properties will be passed to the new units
        (if the units_to_merge have the same value). If "remove" the new units will have an empty
        value for all the properties of the new unit.
    delta_time_ms : float or None
        Window in ms used to detect duplicated spikes. If None, duplicates are not checked
    
    Returns
    -------
    sorting : Sorting
        Sorting object with the selected units merged
  __init__(self, sorting, units_to_merge, new_unit_ids=None, properties_policy='keep', delta_time_ms=0.4)

Class: remove_duplicated_spikes
  Docstring:
    Class to remove duplicated spikes from the spike trains.
    Spikes are considered duplicated if they are less than x
    ms apart where x is the censored period.
    
    Parameters
    ----------
    sorting : BaseSorting
        The parent sorting.
    censored_period_ms : float
        The censored period to consider 2 spikes to be duplicated (in ms).
    method : "keep_first" | "keep_last" | "keep_first_iterative" | "keep_last_iterative" | "random", default: "keep_first"
        Method used to remove the duplicated spikes.
        If method = "random", will randomly choose to remove the first or last spike.
        If method = "keep_first", for each ISI violation, will remove the second spike.
        If method = "keep_last", for each ISI violation, will remove the first spike.
        If method = "keep_first_iterative", will iteratively keep the first spike and remove the following violations.
        If method = "keep_last_iterative", does the same as "keep_first_iterative" but starting from the end.
        In the iterative methods, if there is a triplet A, B, C where (A, B) and (B, C) are in the censored period
        (but not (A, C)), then only B is removed. In the non iterative methods however, only one spike remains.
    
    Returns
    -------
    sorting_without_duplicated_spikes : Remove_DuplicatedSpikesSorting
        The sorting without any duplicated spikes.
  __init__(self, sorting: 'BaseSorting', censored_period_ms: 'float' = 0.3, method: 'str' = 'keep_first') -> 'None'

Function: remove_excess_spikes(sorting: 'BaseSorting', recording: 'BaseRecording')
  Docstring:
    Remove excess spikes from the spike trains.
    Excess spikes are the ones exceeding a recording number of samples, for each segment.
    
    Parameters
    ----------
    sorting : BaseSorting
        The parent sorting.
    recording : BaseRecording
        The recording to use to get the number of samples.
    
    Returns
    -------
    sorting_without_excess_spikes : Sorting
        The sorting without any excess spikes.

Function: remove_redundant_units(sorting_or_sorting_analyzer, align=True, unit_peak_shifts=None, delta_time=0.4, agreement_threshold=0.2, duplicate_threshold=0.8, remove_strategy='minimum_shift', peak_sign='neg', extra_outputs=False) -> 'BaseSorting'
  Docstring:
    Removes redundant or duplicate units by comparing the sorting output with itself.
    
    When a redundant pair is found, there are several strategies to choose which unit is the best:
    
       * "minimum_shift"
       * "highest_amplitude"
       * "max_spikes"
    
    
    Parameters
    ----------
    sorting_or_sorting_analyzer : BaseSorting or SortingAnalyzer
        If SortingAnalyzer, the spike trains can be optionally realigned using the peak shift in the
        template to improve the matching procedure.
        If BaseSorting, the spike trains are not aligned.
    align : bool, default: True
        If True, spike trains are aligned (if a SortingAnalyzer is used)
    delta_time : float, default: 0.4
        The time in ms to consider matching spikes
    agreement_threshold : float, default: 0.2
        Threshold on the agreement scores to flag possible redundant/duplicate units
    duplicate_threshold : float, default: 0.8
        Final threshold on the portion of coincident events over the number of spikes above which the
        unit is removed
    remove_strategy : "minimum_shift" | "highest_amplitude" | "max_spikes", default: "minimum_shift"
        Which strategy to remove one of the two duplicated units:
    
            * "minimum_shift" : keep the unit with best peak alignment (minimum shift)
                             If shifts are equal then the "highest_amplitude" is used
            * "highest_amplitude" : keep the unit with the best amplitude on unshifted max.
            * "max_spikes" : keep the unit with more spikes
    
    peak_sign : "neg" | "pos" | "both", default: "neg"
        Used when remove_strategy="highest_amplitude"
    extra_outputs : bool, default: False
        If True, will return the redundant pairs.
    unit_peak_shifts : dict
        Dictionary mapping the unit_id to the unit's shift (in number of samples).
        A positive shift means the spike train is shifted back in time, while
        a negative shift means the spike train is shifted forward.
    
    Returns
    -------
    BaseSorting
        Sorting object without redundant units
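
The `duplicate_threshold` acts on the fraction of coincident events; a rough sketch of that quantity for two spike trains (illustrative; the actual agreement computation in spikeinterface is more involved):

```python
import numpy as np

def coincidence_fraction(train_a, train_b, delta_frames):
    # Fraction of coincident events (within +/- delta_frames) over the
    # smaller spike count of the two units.
    matches = sum(np.any(np.abs(train_b - t) <= delta_frames) for t in train_a)
    return matches / min(len(train_a), len(train_b))

a = np.array([10, 50, 90])
b = np.array([11, 49, 200])
print(coincidence_fraction(a, b, delta_frames=2))  # 2 of 3 spikes coincide
```

A pair with a fraction above `duplicate_threshold` (default 0.8) would be flagged as duplicated and one unit removed according to `remove_strategy`.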

Class: split_unit_sorting
  Docstring:
    Class that handles splitting of a unit. It creates a new Sorting object linked to parent_sorting.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting object
    split_unit_id : int
        Unit id of the unit to split
    indices_list : list or np.array
        A list of index arrays selecting the spikes to split in each segment.
        Each array can contain more than 2 distinct indices (e.g. for splitting into 3 or more units) and it should
        be the same length as the spike train (for each segment).
        If the sorting has only one segment, indices_list can be a single array
    new_unit_ids : int
        Unit ids of the new units to be created
    properties_policy : "keep" | "remove", default: "keep"
        Policy used to propagate properties. If "keep" the properties of the split unit will be passed
        to the new units. If "remove" the new units will have an empty value for all the properties.
    
    Returns
    -------
    sorting : Sorting
        Sorting object with the selected units split
  __init__(self, sorting, split_unit_id, indices_list, new_unit_ids=None, properties_policy='keep')
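
How an indices array maps spikes to new units can be pictured like this (illustrative; `group_labels` plays the role of one entry of `indices_list`):

```python
import numpy as np

spike_train = np.array([100, 200, 300, 400, 500])  # frames of the unit to split
group_labels = np.array([0, 1, 0, 2, 1])           # same length as the spike train

# one new spike train per distinct label (here, 3 new units)
new_trains = {g: spike_train[group_labels == g] for g in np.unique(group_labels)}
print(new_trains)
```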

Function: train_model(mode='analyzers', labels=None, analyzers=None, metrics_paths=None, folder=None, metric_names=None, imputation_strategies=None, scaling_techniques=None, classifiers=None, test_size=0.2, overwrite=False, seed=None, search_kwargs=None, verbose=True, enforce_metric_params=False, **job_kwargs)
  Docstring:
    Trains and evaluates machine learning models for spike sorting curation.
    
    This function initializes a ``CurationModelTrainer`` object, loads and preprocesses the data,
    and evaluates the specified combinations of imputation strategies, scaling techniques, and classifiers.
    The evaluation results, including the best model and its parameters, are saved to the output folder.
    
    Parameters
    ----------
    mode : ``"analyzers"`` | ``"csv"``, default: ``"analyzers"``
        Mode to use for training.
    analyzers : list of ``SortingAnalyzer`` | None, default: None
        List of ``SortingAnalyzer`` objects containing the quality metrics and labels to use for training,
        if using ``"analyzers"`` mode.
    labels : list of list | None, default: None
        List of curated labels for each unit; must be in the same order as the metrics data.
    metrics_paths : list of str or None, default: None
        List of paths to the CSV files containing the metrics data if using ``"csv"`` mode.
    folder : str | None, default: None
        The folder where outputs such as models and evaluation metrics will be saved.
    metric_names : list of str | None, default: None
        A list of metrics to use for training. If None, default metrics will be used.
    imputation_strategies : list of str | None, default: None
        A list of imputation strategies to try. Can be ``"knn"``, ``"iterative"``, or any allowed
        strategy passable to ``sklearn.impute.SimpleImputer``. If None, the default strategies
        ``["median", "most_frequent", "knn", "iterative"]`` will be used.
    scaling_techniques : list of str | None, default: None
        A list of scaling techniques to try. Can be ``"standard_scaler"``, ``"min_max_scaler"``,
        or ``"robust_scaler"``. If None, all techniques will be used.
    classifiers : list of str | dict | None, default: None
        A list of classifiers to evaluate. Optionally, a dictionary of classifiers and their
        hyperparameter search spaces can be provided. If None, default classifiers will be used.
        Check the ``get_classifier_search_space`` method for the default search spaces & format for custom spaces.
    test_size : float, default: 0.2
        Proportion of the dataset to include in the test split, passed to ``train_test_split`` from ``sklearn``.
    overwrite : bool, default: False
        Overwrites the ``folder`` if it already exists.
    seed : int | None, default: None
        Random seed for reproducibility. If None, a random seed will be generated.
    search_kwargs : dict or None, default: None
        Keyword arguments passed to ``BayesSearchCV`` or ``RandomizedSearchCV`` from ``sklearn``. If None, use
        ``search_kwargs = {'cv': 3, 'scoring': 'balanced_accuracy', 'n_iter': 25}``.
    verbose : bool, default: True
        If True, useful information is printed during training.
    enforce_metric_params : bool, default: False
        If True and metric parameters used to calculate metrics for different ``sorting_analyzer`` objects are
        different, an error will be raised.
    
    Returns
    -------
    CurationModelTrainer
        The ``CurationModelTrainer`` object used for training and evaluation.
    
    Notes
    -----
    This function handles the entire workflow of initializing the trainer, loading and preprocessing the data,
    and evaluating the models. The evaluation results are saved to the specified output folder.
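
The evaluation grid that ``train_model`` explores is the Cartesian product of the supplied options; a toy enumeration using the default strategies named above (the classifier names are an illustrative subset, not the library's default list):

```python
from itertools import product

imputation_strategies = ["median", "most_frequent", "knn", "iterative"]
scaling_techniques = ["standard_scaler", "min_max_scaler", "robust_scaler"]
classifiers = ["RandomForestClassifier", "GradientBoostingClassifier"]  # illustrative subset

# every (imputer, scaler, classifier) combination is trained and evaluated
combos = list(product(imputation_strategies, scaling_techniques, classifiers))
print(len(combos))  # 4 imputers x 3 scalers x 2 classifiers = 24 pipelines
```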

Function: validate_curation_dict(curation_dict)
  Docstring:
    Validate that the curation dictionary given as parameter complies with the format.

    The function does not return anything. It raises an error if something is wrong in the format.
    
    Parameters
    ----------
    curation_dict : dict
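
A toy sketch of the validate-or-raise pattern such a function follows. The required keys shown here are hypothetical placeholders; the real required keys are defined by the spikeinterface curation format:

```python
def validate_dict(d, required_keys=("format_version", "unit_ids")):
    # Hypothetical minimal check: raise on bad input instead of
    # returning a value, mirroring the behavior described above.
    if not isinstance(d, dict):
        raise TypeError("curation description must be a dict")
    missing = [k for k in required_keys if k not in d]
    if missing:
        raise ValueError(f"missing required keys: {missing}")

validate_dict({"format_version": 1, "unit_ids": [0, 1, 2]})  # passes silently
```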

==== DELIM ====
API for module: spikeinterface.extractors

Class: ALFSortingExtractor
  Docstring:
    Load ALF format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the ALF folder.
    sampling_frequency : int, default: 30000
        The sampling frequency.
    
    Returns
    -------
    extractor : ALFSortingExtractor
        The loaded data.
  __init__(self, folder_path, sampling_frequency=30000)

Class: AlphaOmegaEventExtractor
  Docstring:
    Class for reading events from AlphaOmega MPX file format
    
    Parameters
    ----------
    folder_path : str or Path-like
        The folder path to the AlphaOmega events.
  __init__(self, folder_path)

Class: AlphaOmegaRecordingExtractor
  Docstring:
    Class for reading from AlphaRS and AlphaLab SnR boards.
    
    Based on :py:class:`neo.rawio.AlphaOmegaRawIO`
    
    Parameters
    ----------
    folder_path : str or Path-like
        The folder path to the AlphaOmega recordings.
    lsx_files : list of strings or None, default: None
        A list of files that refer to the mpx files to load.
    stream_id : {"RAW", "LFP", "SPK", "ACC", "AI", "UD"}, default: "RAW"
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_alphaomega
    >>> recording = read_alphaomega(folder_path="alphaomega_folder")
  __init__(self, folder_path, lsx_files=None, stream_id='RAW', stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: AxonaRecordingExtractor
  Docstring:
    Class for reading Axona RAW format.
    
    Based on :py:class:`neo.rawio.AxonaRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_axona
    >>> recording = read_axona(file_path=r'my_data.set')
  __init__(self, file_path: 'str | Path', all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: BiocamRecordingExtractor
  Docstring:
    Class for reading data from a Biocam file from 3Brain.
    
    Based on :py:class:`neo.rawio.BiocamRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    mea_pitch : float, default: None
        The inter-electrode distance (pitch) between electrodes.
    electrode_width : float, default: None
        Width of the electrodes in um.
    fill_gaps_strategy : "zeros" | "synthetic_noise" | None, default: None
        The strategy to fill the gaps in the data when using event-based
        compression. If the file is event-based compressed, a fill-gaps
        strategy must be specified:
    
        * "zeros": the gaps are filled with unsigned 0s (2048). This value is the "0" of the unsigned 12 bits
                   representation of the data.
        * "synthetic_noise": the gaps are filled with synthetic noise.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, mea_pitch=None, electrode_width=None, fill_gaps_strategy=None, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)
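
The "zeros" strategy can be pictured with a toy dense reconstruction (illustrative only; 2048 is the unsigned "zero" of the 12-bit representation mentioned above):

```python
import numpy as np

def fill_gaps_with_zeros(event_frames, event_values, num_samples):
    # Build a dense trace where every frame without a recorded event
    # holds the unsigned 12-bit zero (2048).
    trace = np.full(num_samples, 2048, dtype=np.uint16)
    trace[event_frames] = event_values
    return trace

print(fill_gaps_with_zeros([1, 3], [100, 4000], num_samples=5))
```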

Class: BlackrockRecordingExtractor
  Docstring:
    Class for reading BlackRock data.
    
    Based on :py:class:`neo.rawio.BlackrockRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: BlackrockSortingExtractor
  Docstring:
    Class for reading BlackRock spiking data.
    
    Based on :py:class:`neo.rawio.BlackrockRawIO`
    
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from
    stream_id : str, default: None
        Used to extract information about the sampling frequency and t_start from the analog signal if provided.
    stream_name : str, default: None
        Used to extract information about the sampling frequency and t_start from the analog signal if provided.
    sampling_frequency : float, default: None
        The sampling frequency for the sorting extractor. When the signal data is available (.ncs) those files will be
        used to extract the frequency automatically. Otherwise, the sampling frequency needs to be specified for
        this extractor to be initialized.
    nsx_to_load : int | list | str, default: None
        IDs of nsX file from which to load data, e.g., if set to 5 only data from the ns5 file are loaded.
        If 'all', then all nsX will be loaded. If None, all nsX files will be loaded. If empty list, no nsX files will be loaded.
  __init__(self, file_path, stream_id: 'Optional[str]' = None, stream_name: 'Optional[str]' = None, sampling_frequency: 'Optional[float]' = None, nsx_to_load: 'Optional[int | list | str]' = None)

Class: CedRecordingExtractor
  Docstring:
    Class for reading smr/smrw CED file.
    
    Based on :py:class:`neo.rawio.CedRawIO` / sonpy
    
    Alternative to read_spike2 which does not handle smrx
    
    Parameters
    ----------
    file_path : str
        The file path to the smr or smrx file.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_ced
    >>> recording = read_ced(file_path=r'my_data.smr')
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: CellExplorerSortingExtractor
  Docstring:
    Extracts spiking information from `.mat` file stored in the CellExplorer format.
    Spike times are stored in units of seconds so we transform them to units of samples.
    
    The newer version of the format is described here:
    https://cellexplorer.org/data-structure/
    
    Whereas the old format is described here:
    https://github.com/buzsakilab/buzcode/wiki/Data-Formatting-Standards
    
    Parameters
    ----------
    file_path: str | Path
        Path to `.mat` file containing spikes. Usually named `session_id.spikes.cellinfo.mat`
    sampling_frequency: float | None, default: None
        The sampling frequency of the data. If None, it will be extracted from the files.
    session_info_file_path: str | Path | None, default: None
        Path to the `sessionInfo.mat` file. If None, it will be inferred from the file_path.
  __init__(self, file_path: 'str | Path', sampling_frequency: 'float | None' = None, session_info_file_path: 'str | Path | None' = None)

Class: CombinatoSortingExtractor
  Docstring:
    Load Combinato format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the Combinato folder.
    sampling_frequency : int or None, default: None
        The sampling frequency.
    user : str, default: "simple"
        The username that ran the sorting
    det_sign : "both" | "pos" | "neg", default: "both"
        Which sign was used for detection.
    keep_good_only : bool, default: True
        Whether to only keep good units.
    
    Returns
    -------
    extractor : CombinatoSortingExtractor
        The loaded data.
  __init__(self, folder_path, sampling_frequency=None, user='simple', det_sign='both', keep_good_only=True)

Class: CompressedBinaryIblExtractor
  Docstring:
    Load IBL data as an extractor object.
    
    IBL has a custom format: a compressed binary with SpikeGLX meta.

    The format is like SpikeGLX (it has a meta file) but contains:
    
      * "cbin" file (instead of "bin")
      * "ch" file used by mtscomp for compression info
    
    Parameters
    ----------
    folder_path : str or Path
        Path to ibl folder.
    load_sync_channel : bool, default: False
        Load or not the last channel (sync).
        If not then the probe is loaded.
    stream_name : {"ap", "lp"}, default: "ap"
        Whether to load AP or LFP band, one
        of "ap" or "lp".
    cbin_file_path : str, Path or None, default: None
        The cbin file of the recording. If None, searches in `folder_path` for file.
    cbin_file : str or None, default: None
        (deprecated) The cbin file of the recording. If None, searches in `folder_path` for file.
    
    Returns
    -------
    recording : CompressedBinaryIblExtractor
        The loaded data.
  __init__(self, folder_path=None, load_sync_channel=False, stream_name='ap', cbin_file_path=None, cbin_file=None)

Class: EDFRecordingExtractor
  Docstring:
    Class for reading EDF (European data format) folder.
    
    Based on :py:class:`neo.rawio.EDFRawIO`
    
    Parameters
    ----------
    file_path: str
        The file path to load the recordings from.
    stream_id: str, default: None
        If there are several streams, specify the stream id you want to load.
        For this neo reader streams are defined by their sampling frequency.
    stream_name: str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations: bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: HDSortSortingExtractor
  Docstring:
    Load HDSort format data as a sorting extractor.
    
    Parameters
    ----------
    file_path : str or Path
        Path to HDSort mat file.
    keep_good_only : bool, default: True
        Whether to only keep good units.
    
    Returns
    -------
    extractor : HDSortSortingExtractor
        The loaded data.
  __init__(self, file_path, keep_good_only=True)

Class: HerdingspikesSortingExtractor
  Docstring:
    Load HerdingSpikes format data as a sorting extractor.
    
    Parameters
    ----------
    file_path : str or Path
        Path to the HerdingSpikes file.
    load_unit_info : bool, default: True
        Whether to load the unit info from the file.
    
    Returns
    -------
    extractor : HerdingSpikesSortingExtractor
        The loaded data.
  __init__(self, file_path, load_unit_info=True)
  Method: load_unit_info(self)
    Docstring:
      if 'centres' in self._rf.keys() and len(self._spike_times) > 0:
          self._unit_locs = self._rf['centres'][()]  # cache for faster access
          for u_i, unit_id in enumerate(self._unit_ids):
              self.set_unit_property(unit_id, property_name='unit_location', value=self._unit_locs[u_i])
      inds = []  # get these only once
      for unit_id in self._unit_ids:
          inds.append(np.where(self._cluster_id == unit_id)[0])
      if 'data' in self._rf.keys() and len(self._spike_times) > 0:
          d = self._rf['data'][()]
          for i, unit_id in enumerate(self._unit_ids):
              self.set_unit_spike_features(unit_id, 'spike_location', d[:, inds[i]].T)
      if 'ch' in self._rf.keys() and len(self._spike_times) > 0:
          d = self._rf['ch'][()]
          for i, unit_id in enumerate(self._unit_ids):
              self.set_unit_spike_features(unit_id, 'max_channel', d[inds[i]])

Class: IblRecordingExtractor
  Docstring:
    Stream IBL data as an extractor object.
    
    Parameters
    ----------
    eid : str or None, default: None
        The session ID to extract recordings for.
        In ONE, this is sometimes referred to as the "eid".
        When doing a session lookup such as
    
        >>> from one.api import ONE
        >>> one = ONE(base_url="https://openalyx.internationalbrainlab.org", password="international", silent=True)
        >>> sessions = one.alyx.rest("sessions", "list", tag="2022_Q2_IBL_et_al_RepeatedSite")
    
        each returned value in `sessions` refers to it as the "id".
    pid : str or None, default: None
        Probe insertion UUID in Alyx. To retrieve the PID from a session (or eid), use the following code:
    
        >>> from one.api import ONE
        >>> one = ONE(base_url="https://openalyx.internationalbrainlab.org", password="international", silent=True)
        >>> pids, _ = one.eid2pid("session_eid")
        >>> pid = pids[0]
    
        Either `eid` or `pid` must be provided.
    stream_name : str
        The name of the stream to load for the session.
        These can be retrieved from calling `StreamingIblExtractor.get_stream_names(session="<your session ID>")`.
    load_sync_channel : bool, default: False
        Load or not the last channel (sync).
        If not then the probe is loaded.
    cache_folder : str or None, default: None
        The location to temporarily store chunks of data during streaming.
        The default uses the folder designated by ONE.alyx._par.CACHE_DIR / "cache", which is typically the designated
        "Downloads" folder on your operating system. As long as `remove_cached` is set to True, the only files that will
        persist in this folder are the metadata header files and the chunk of data being actively streamed and used in RAM.
    remove_cached : bool, default: True
        Whether or not to remove streamed data from the cache immediately after it is read.
        If you expect to reuse fetched data many times, and have the disk space available, it is recommended to set this to False.
    stream : bool, default: True
        Whether or not to stream the data.
    one : one.api.OneAlyx, default: None
        An instance of the ONE API to use for data loading.
        If not provided, a default instance is created using the default parameters.
        If you need to use a specific instance, you can create it using the ONE API and pass it here.
    
    Returns
    -------
    recording : IblStreamingRecordingExtractor
        The recording extractor which allows access to the traces.
  __init__(self, eid: 'str | None' = None, pid: 'str | None' = None, stream_name: 'str | None' = None, load_sync_channel: 'bool' = False, cache_folder: 'Optional[Path | str]' = None, remove_cached: 'bool' = True, stream: 'bool' = True, one: "'one.api.OneAlyx'" = None, stream_type: 'str | None' = None)
  Method: get_stream_names(eid: 'str', cache_folder: 'Optional[Union[Path, str]]' = None, one=None) -> 'List[str]'
    Docstring:
      Convenient retrieval of available stream names.
      
      Parameters
      ----------
      eid : str
          The experiment ID to extract recordings for.
          In ONE, this is sometimes referred to as the "eid".
          When doing a session lookup such as
      
          >>> from one.api import ONE
          >>> one = ONE(base_url="https://openalyx.internationalbrainlab.org", password="international", silent=True)
          >>> eids = one.alyx.rest("sessions", "list", tag="2022_Q2_IBL_et_al_RepeatedSite")
      
          each returned value in `eids` refers to it as the experiment "id".
      cache_folder : str or None, default: None
          The location to temporarily store chunks of data during streaming.
      one : one.api.OneAlyx, default: None
          An instance of the ONE API to use for data loading.
          If not provided, a default instance is created using the default parameters.
          If you need to use a specific instance, you can create it using the ONE API and pass it here.
      stream_type : "ap" | "lf" | None, default: None
          The stream type to load, required when pid is provided and stream_name is not.
      
      Returns
      -------
      stream_names : list of str
          List of stream names as expected by the `stream_name` argument for the class initialization.

Class: IblSortingExtractor
  Docstring:
    Load IBL data as a sorting extractor.
    
    Parameters
    ----------
    pid: str
        Probe insertion UUID in Alyx. To retrieve the PID from a session (or eid), use the following code:
    
        >>> from one.api import ONE
        >>> one = ONE(base_url="https://openalyx.internationalbrainlab.org", password="international", silent=True)
        >>> pids, _ = one.eid2pid("session_eid")
        >>> pid = pids[0]
    one: One | dict, required
        Instance of ONE.api or dict to use for data loading.
        For multi-processing applications, this can also be a dictionary of ONE.api arguments
        For example: one=dict(base_url='https://alyx.internationalbrainlab.org', mode='remote')
    good_clusters_only: bool, default: False
        If True, only load the good clusters
    load_unit_properties: bool, default: True
        If True, load the unit properties from the IBL database
    kwargs: dict, optional
        Additional keyword arguments to pass to the IBL SpikeSortingLoader constructor, such as `revision`.
    Returns
    -------
    extractor : IBLSortingExtractor
        The loaded data.
  __init__(self, pid: 'str', good_clusters_only: 'bool' = False, load_unit_properties: 'bool' = True, one=None, **kwargs)

Class: IntanRecordingExtractor
  Docstring:
    Class for reading data from an Intan board. Supports rhd and rhs formats.
    
    Based on :py:class:`neo.rawio.IntanRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    ignore_integrity_checks : bool, default: False
        If True, data that violates integrity assumptions will be loaded. At the moment the only integrity
        check we perform is that timestamps are continuous. Setting this to True will ignore this check and set
        the attribute `discontinuous_timestamps` to True in the underlying neo object.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
        In Intan the ids provided by NeoRawIO are the hardware channel ids while the names are custom names given by
        the user
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_intan
    # intan amplifier data is stored in stream_id = '0'
    >>> recording = read_intan(file_path=r'my_data.rhd', stream_id='0')
    # intan has multi-file formats as well, but in this case our path should point to the header file 'info.rhd'
    >>> recording = read_intan(file_path=r'info.rhd', stream_id='0')
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations=False, use_names_as_ids=False, ignore_integrity_checks: 'bool' = False)

Class: KiloSortSortingExtractor
  Docstring:
    Load Kilosort format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the output Phy folder (containing the params.py).
    keep_good_only : bool, default: False
        Whether to only keep good units.
        If True, only Kilosort-labeled 'good' units are returned.
    remove_empty_units : bool, default: True
        If True, empty units are removed from the sorting extractor.
    
    Returns
    -------
    extractor : KiloSortSortingExtractor
        The loaded Sorting object.
  __init__(self, folder_path: 'Path | str', keep_good_only: 'bool' = False, remove_empty_units: 'bool' = True)

Class: KlustaSortingExtractor
  Docstring:
    Load Klusta format data as a sorting extractor.
    
    Parameters
    ----------
    file_or_folder_path : str or Path
        Path to the Klusta file or folder.
    exclude_cluster_groups : list or str, default: None
        Cluster groups to exclude (e.g. "noise" or ["noise", "mua"]).
    
    Returns
    -------
    extractor : KlustaSortingExtractor
        The loaded data.
  __init__(self, file_or_folder_path, exclude_cluster_groups=None)

Class: MCSH5RecordingExtractor
  Docstring:
    Load a MCS H5 file as a recording extractor.
    
    Parameters
    ----------
    file_path : str or Path
        The path to the MCS h5 file.
    stream_id : int, default: 0
        The stream ID to load.
    
    Returns
    -------
    recording : MCSH5RecordingExtractor
        The loaded data.
  __init__(self, file_path, stream_id=0)

Class: MCSRawRecordingExtractor
  Docstring:
    Class for reading data from "Raw" Multi Channel System (MCS) format.
    This format is NOT the native MCS format (.mcd).
    This format is a raw format with an internal binary header exported by the
    "MC_DataTool binary conversion" with the option header selected.
    
    Based on :py:class:`neo.rawio.RawMCSRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    block_index : int, default: None
        If there are several blocks, specify the block index you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False, use_names_as_ids: 'bool' = False)

Class: MClustSortingExtractor
  Docstring:
    Load MClust sorting solution as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to folder with t files.
    sampling_frequency : float
        The sampling frequency in Hz.
    sampling_frequency_raw : float or None, default: None
        Required to read files with raw formats. In that case, the samples are saved in the same
        unit as the input data.
        Examples:
            - If raw time is in tenths of ms, sampling_frequency_raw=10000
            - If raw time is in samples, sampling_frequency_raw=sampling_frequency

    Returns
    -------
    extractor : MClustSortingExtractor
        Loaded data.
  __init__(self, folder_path, sampling_frequency, sampling_frequency_raw=None)
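
  Example: a minimal sketch of the `sampling_frequency_raw` conversion. The helper `raw_to_frames` is hypothetical; it illustrates the documented convention that raw values are stored in units of 1/`sampling_frequency_raw` seconds and must be rescaled to the recording's sampling frequency.

```python
def raw_to_frames(raw_times, sampling_frequency, sampling_frequency_raw):
    # Raw values are in units of 1 / sampling_frequency_raw seconds:
    # divide to get seconds, multiply to get frames at the target rate.
    return [int(round(t / sampling_frequency_raw * sampling_frequency))
            for t in raw_times]

# With sampling_frequency_raw=10000 (0.1 ms units), a raw value of 10000
# is 1 second, i.e. frame 30000 for a 30 kHz recording.
frames = raw_to_frames([10000], sampling_frequency=30000.0,
                       sampling_frequency_raw=10000.0)
```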

Class: MEArecRecordingExtractor
  Docstring:
    Class for reading data from MEArec simulated data.
    
    Based on :py:class:`neo.rawio.MEArecRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path: 'Union[str, Path]', all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: MEArecSortingExtractor
  Docstring:
    Class for reading sorting data from MEArec simulated data.
  __init__(self, file_path: 'Union[str, Path]')
  Method: read_sampling_frequency(self, file_path: 'Union[str, Path]') -> 'float'
    Docstring:
      None

Class: MaxwellEventExtractor
  Docstring:
    Class for reading TTL events from Maxwell files.
  __init__(self, file_path)

Class: MaxwellRecordingExtractor
  Docstring:
    Class for reading data from Maxwell device.
    It handles MaxOne (old and new format) and MaxTwo.
    
    Based on :py:class:`neo.rawio.MaxwellRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to the maxwell h5 file.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
        For MaxTwo, when there are several wells at the same time you
        need to specify stream_id='well000', 'well001', etc.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    rec_name : str, default: None
        When the file contains several recordings you need to specify the one
        you want to extract. (rec_name='rec0000').
    install_maxwell_plugin : bool, default: False
        If True, install the maxwell plugin for neo.
    block_index : int, default: None
        If there are several blocks (experiments), specify the block index you want to load
  __init__(self, file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False, rec_name=None, install_maxwell_plugin=False, use_names_as_ids: 'bool' = False)
  Method: install_maxwell_plugin(self, force_download=False)
    Docstring:
      None
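
  Example: for MaxTwo files with multiple wells, stream ids follow the `wellXXX` pattern described above. A small sketch, assuming the zero-padded three-digit form ('well000'):

```python
def maxtwo_stream_ids(num_wells):
    # Build candidate MaxTwo stream ids, e.g. "well000", "well001", ...
    return [f"well{index:03d}" for index in range(num_wells)]

stream_ids = maxtwo_stream_ids(3)
```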

Class: MdaRecordingExtractor
  Docstring:
    Load MDA format data as a recording extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the MDA folder.
    raw_fname : str, default: "raw.mda"
        File name of raw file
    params_fname : str, default: "params.json"
        File name of params file
    geom_fname : str, default: "geom.csv"
        File name of geom file
    
    Returns
    -------
    extractor : MdaRecordingExtractor
        The loaded data.
  __init__(self, folder_path, raw_fname='raw.mda', params_fname='params.json', geom_fname='geom.csv')
  Method: write_recording(recording, save_path, params={}, raw_fname='raw.mda', params_fname='params.json', geom_fname='geom.csv', dtype=None, verbose=False, **job_kwargs)
    Docstring:
      Write a recording to file in MDA format.
      
      Parameters
      ----------
      recording : RecordingExtractor
          The recording extractor to be saved.
      save_path : str or Path
          The folder to save the Mda files.
      params : dictionary
          Dictionary with optional parameters to save metadata.
          Sampling frequency is appended to this dictionary.
      raw_fname : str, default: "raw.mda"
          File name of raw file
      params_fname : str, default: "params.json"
          File name of params file
      geom_fname : str, default: "geom.csv"
          File name of geom file
      dtype : dtype or None, default: None
          Data type to be used. If None, the dtype is the same as the recording traces.
      verbose : bool
          If True, shows a progress bar when saving the recording.
      **job_kwargs : dict
          Used by the job_tools module to set:
      
              * chunk_size or chunk_memory, or total_memory
              * n_jobs
              * progress_bar

Class: MdaSortingExtractor
  Docstring:
    Load MDA format data as a sorting extractor.
    
    NOTE: As in the MDA format, the max_channel property indexes the channels that are given as input
    to the sorter.
    If sorting was run on a subset of channels of the recording, then the max_channel values are
    based on that subset, so care must be taken when associating these values with a recording.
    If additional sorting segments are added to this sorting extractor after initialization,
    then max_channel will not be updated. The max_channel indices begin at 1.
    
    Parameters
    ----------
    file_path : str or Path
        Path to the MDA file.
    sampling_frequency : int
        The sampling frequency.
    
    Returns
    -------
    extractor : MdaSortingExtractor
        The loaded data.
  __init__(self, file_path, sampling_frequency)
  Method: write_sorting(sorting, save_path, write_primary_channels=False)
    Docstring:
      None
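
  Example: as the note above says, `max_channel` is 1-based and indexes the channels given to the sorter, so mapping it back to the recording's channel ids requires knowing the sorted subset. The helper below is a hypothetical sketch of that mapping:

```python
def max_channel_to_recording_id(max_channel, sorted_channel_ids):
    # sorted_channel_ids is the (ordered) subset of recording channel ids
    # that was given as input to the sorter; max_channel is 1-based.
    return sorted_channel_ids[max_channel - 1]

# Sorting ran on channels [4, 7, 9] of the recording; max_channel == 2
# therefore refers to recording channel 7.
channel_id = max_channel_to_recording_id(2, [4, 7, 9])
```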

Class: NeuralynxRecordingExtractor
  Docstring:
    Class for reading neuralynx folder
    
    Based on :py:class:`neo.rawio.NeuralynxRawIO`
    
    Parameters
    ----------
    folder_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    exclude_filename : list[str], default: None
        List of filenames to exclude from loading.
        For example, use `exclude_filename=["events.nev"]` to skip loading the event file.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    strict_gap_mode : bool, default: False
        See neo documentation.
        Detect gaps using strict mode or not.
        * strict_gap_mode = True: a gap is considered when the timestamp difference between
        two consecutive data packets is more than one sample interval.
        * strict_gap_mode = False: a gap has an increased tolerance. Some new systems
        with different clocks need this option, otherwise too many gaps are detected.
    
        Note that here the default is False contrary to neo.
  __init__(self, folder_path: 'str | Path', stream_id=None, stream_name=None, all_annotations=False, exclude_filename=None, strict_gap_mode=False, use_names_as_ids: 'bool' = False)

Class: NeuralynxSortingExtractor
  Docstring:
    Class for reading spike data from a folder with neuralynx spiking data (i.e .nse and .ntt formats).
    
    Based on :py:class:`neo.rawio.NeuralynxRawIO`
    
    Parameters
    ----------
    folder_path : str
        The file path to load the recordings from.
    sampling_frequency : float
        The sampling frequency for the spiking channels. When the signal data is available (.ncs) those files will be
        used to extract the frequency. Otherwise, the sampling frequency needs to be specified for this extractor.
    stream_id : str, default: None
        Used to extract information about the sampling frequency and t_start from the analog signal if provided.
    stream_name : str, default: None
        Used to extract information about the sampling frequency and t_start from the analog signal if provided.
  __init__(self, folder_path: 'str', sampling_frequency: 'Optional[float]' = None, stream_id: 'Optional[str]' = None, stream_name: 'Optional[str]' = None)

Class: NeuroExplorerRecordingExtractor
  Docstring:
    Class for reading NEX (NeuroExplorer data format) files.
    
    Based on :py:class:`neo.rawio.NeuroExplorerRawIO`
    
    Importantly, at the moment, this extractor only loads one channel of the recording.
    This is because the NeuroExplorerRawIO class does not support multi-channel recordings,
    as channels in the NeuroExplorer format might have different sampling rates.
    
    Consider extracting all the channels and then concatenating them with the aggregate_channels function.
    
    >>> from spikeinterface.extractors.neoextractors.neuroexplorer import NeuroExplorerRecordingExtractor
    >>> from spikeinterface.core import aggregate_channels
    >>>
    >>> file_path="/the/path/to/your/nex/file.nex"
    >>>
    >>> streams = NeuroExplorerRecordingExtractor.get_streams(file_path=file_path)
    >>> stream_names = streams[0]
    >>>
    >>> your_signal_stream_names = "Here goes the logic to filter from stream names the ones that you know have the same sampling rate and you want to aggregate"
    >>>
    >>> recording_list = [NeuroExplorerRecordingExtractor(file_path=file_path, stream_name=stream_name) for stream_name in your_signal_stream_names]
    >>> recording = aggregate_channels(recording_list)
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
        For this neo reader streams are defined by their sampling frequency.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: NeuroNexusRecordingExtractor
  Docstring:
    Class for reading data from NeuroNexus Allego.
    
    Based on :py:class:`neo.rawio.NeuronexusRawIO`
    
    Parameters
    ----------
    file_path : str | Path
        The file path to the metadata .xdat.json file of an Allego session
    stream_id : str | None, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str | None, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
        In NeuroNexus the ids provided by NeoRawIO are the hardware channel ids stored as `ntv_chan_name` within
        the metadata, and the names are the `chan_names`.
  __init__(self, file_path: 'str | Path', stream_id: 'str | None' = None, stream_name: 'str | None' = None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: NeuroScopeRecordingExtractor
  Docstring:
    Class for reading data from neuroscope
    Ref: http://neuroscope.sourceforge.net
    
    Based on :py:class:`neo.rawio.NeuroScopeRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to the binary container usually a .dat, .lfp, .eeg extension.
    xml_file_path : str, default: None
        The path to the xml file. If None, the xml file is assumed to have the same name as the binary file.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, xml_file_path=None, stream_id=None, stream_name: 'bool' = None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: NeuroScopeSortingExtractor
  Docstring:
    Extracts spiking information from an arbitrary number of .res.%i and .clu.%i files in the general folder path.
    
    The .res file is a text file with a sorted list of spike times from all units, given as integers in sample units.
    The .clu file has one more row than the .res file: the first row gives the total number of
    unique unit ids in the file (which may exclude 0 & 1 from this count),
    and the remaining rows indicate which unit id the corresponding entry in the .res file refers to.
    The group id is loaded as unit property "group".
    
    In the original Neuroscope format:
        Unit ID 0 is the cluster of unsorted spikes (noise).
        Unit ID 1 is a cluster of multi-unit spikes.
    
    The function defaults to returning multi-unit activity as the first index, and ignoring unsorted noise.
    To return only the fully sorted units, set keep_mua_units=False.
    
    The sorting extractor always returns unit IDs from 1, ..., number of chosen clusters.
    
    Parameters
    ----------
    folder_path : str
        Optional. Path to the collection of .res and .clu text files. Will auto-detect format.
    resfile_path : PathType
        Optional. Path to a particular .res text file. If given, only the single .res file
        (and the respective .clu file) are loaded
    clufile_path : PathType
        Optional. Path to a particular .clu text file. If given, only the single .clu file
        (and the respective .res file) are loaded
    keep_mua_units : bool, default: True
        Optional. Whether or not to return sorted spikes from multi-unit activity
    exclude_shanks : list
        Optional. List of indices to ignore. The set of all possible indices is chosen by default, extracted as the
        final integer of all the .res.%i and .clu.%i pairs.
    xml_file_path : PathType, default: None
        Path to the .xml file referenced by this sorting.
  __init__(self, folder_path: 'OptionalPathType' = None, resfile_path: 'OptionalPathType' = None, clufile_path: 'OptionalPathType' = None, keep_mua_units: 'bool' = True, exclude_shanks: 'Optional[list]' = None, xml_file_path: 'OptionalPathType' = None)
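
  Example: the .res/.clu layout described above can be parsed with a few lines of plain Python. This hypothetical sketch groups spike times by unit id, always dropping the unsorted cluster (id 0) and optionally the multi-unit cluster (id 1), mirroring the extractor's default behavior:

```python
def parse_res_clu(res_text, clu_text, keep_mua_units=True):
    # .res: sorted spike times in samples; .clu: first row is the number of
    # unique ids, remaining rows map one-to-one onto the .res rows.
    spike_times = [int(line) for line in res_text.split()]
    clu_lines = [int(line) for line in clu_text.split()]
    n_clusters = clu_lines[0]  # declared number of unique ids (unused here)
    unit_ids = clu_lines[1:]
    units = {}
    for t, u in zip(spike_times, unit_ids):
        if u == 0 or (u == 1 and not keep_mua_units):
            continue  # skip noise (and optionally multi-unit activity)
        units.setdefault(u, []).append(t)
    return units

res = "10\n20\n30\n40\n"
clu = "3\n0\n1\n2\n2\n"  # 3 unique ids; entries: noise, MUA, unit 2, unit 2
units = parse_res_clu(res, clu)
```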

Class: NixRecordingExtractor
  Docstring:
    Class for reading Nix file
    
    Based on :py:class:`neo.rawio.NIXRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    block_index : int, default: None
        If there are several blocks, specify the block index you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, block_index=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: NwbRecordingExtractor
  Docstring:
    Load an NWBFile as a RecordingExtractor.
    
    Parameters
    ----------
    file_path : str, Path, or None
        Path to the NWB file or an s3 URL. Use this parameter to specify the file location
        if not using the `file` parameter.
    electrical_series_name : str or None, default: None
        Deprecated, use `electrical_series_path` instead.
    electrical_series_path : str or None, default: None
        The name of the ElectricalSeries object within the NWB file. This parameter is crucial
        when the NWB file contains multiple ElectricalSeries objects. It helps in identifying
        which specific series to extract data from. If there is only one ElectricalSeries and
        this parameter is not set, that unique series will be used by default.
        If multiple ElectricalSeries are present and this parameter is not set, an error is raised.
        The `electrical_series_path` corresponds to the path within the NWB file, e.g.,
        'acquisition/MyElectricalSeries'.
    load_time_vector : bool, default: False
        If set to True, the time vector is also loaded into the recording object. Useful for
        cases where precise timing information is required.
    samples_for_rate_estimation : int, default: 1000
        The number of timestamp samples used for estimating the sampling rate. This is relevant
        when the 'rate' attribute is not available in the ElectricalSeries.
    stream_mode : "fsspec" | "remfile" | "zarr" | None, default: None
        Determines the streaming mode for reading the file. Use this for optimized reading from
        different sources, such as local disk or remote servers.
    load_channel_properties : bool, default: True
        If True, all the channel properties are loaded from the NWB file and stored as properties.
        For streaming purposes, it can be useful to set this to False to speed up streaming.
    file : file-like object or None, default: None
        A file-like object representing the NWB file. Use this parameter if you have an in-memory
        representation of the NWB file instead of a file path.
    cache : bool, default: False
        Indicates whether to cache the file locally when using streaming. Caching can improve performance for
        remote files.
    stream_cache_path : str, Path, or None, default: None
        Specifies the local path for caching the file. Relevant only if `cache` is True.
    storage_options : dict | None = None,
        These are the additional kwargs (e.g. AWS credentials) that are passed to the zarr.open convenience function.
        This is only used on the "zarr" stream_mode.
    use_pynwb : bool, default: False
        Uses the pynwb library to read the NWB file. Setting this to False, the default, uses h5py
        to read the file. Using h5py can improve performance by bypassing some of the PyNWB validations.
    
    Returns
    -------
    recording : NwbRecordingExtractor
        The recording extractor for the NWB file.
    
    Examples
    --------
    Run on local file:
    
    >>> from spikeinterface.extractors.nwbextractors import NwbRecordingExtractor
    >>> rec = NwbRecordingExtractor(filepath)
    
    Run on s3 URL from the DANDI Archive:
    
    >>> from spikeinterface.extractors.nwbextractors import NwbRecordingExtractor
    >>> from dandi.dandiapi import DandiAPIClient
    >>>
    >>> # get s3 path
    >>> dandiset_id = "001054"
    >>> filepath = "sub-Dory/sub-Dory_ses-2020-09-14-004_ecephys.nwb"
    >>> with DandiAPIClient() as client:
    >>>     asset = client.get_dandiset(dandiset_id).get_asset_by_path(filepath)
    >>>     s3_url = asset.get_content_url(follow_redirects=1, strip_query=True)
    >>>
    >>> rec = NwbRecordingExtractor(s3_url, stream_mode="remfile")
  __init__(self, file_path: 'str | Path | None' = None, electrical_series_name: 'str | None' = None, load_time_vector: 'bool' = False, samples_for_rate_estimation: 'int' = 1000, stream_mode: "Optional[Literal['fsspec', 'remfile', 'zarr']]" = None, stream_cache_path: 'str | Path | None' = None, electrical_series_path: 'str | None' = None, load_channel_properties: 'bool' = True, *, file: 'BinaryIO | None' = None, cache: 'bool' = False, storage_options: 'dict | None' = None, use_pynwb: 'bool' = False)
  Method: fetch_available_electrical_series_paths(file_path: 'str | Path', stream_mode: "Optional[Literal['fsspec', 'remfile', 'zarr']]" = None, storage_options: 'dict | None' = None) -> 'list[str]'
    Docstring:
      Retrieves the paths to all ElectricalSeries objects within a neurodata file.
      
      Parameters
      ----------
      file_path : str | Path
          The path to the neurodata file to be analyzed.
      stream_mode : "fsspec" | "remfile" | "zarr" | None, optional
          Determines the streaming mode for reading the file. Use this for optimized reading from
          different sources, such as local disk or remote servers.
      storage_options : dict | None = None,
          These are the additional kwargs (e.g. AWS credentials) that are passed to the zarr.open convenience function.
          This is only used on the "zarr" stream_mode.

      Returns
      -------
      list of str
          A list of paths to all ElectricalSeries objects found in the file.
      
      
      Notes
      -----
      The paths are returned as strings, and can be used to load the desired ElectricalSeries object.
      Examples of paths are:
          - "acquisition/ElectricalSeries1"
          - "acquisition/ElectricalSeries2"
          - "processing/ecephys/LFP/ElectricalSeries1"
          - "processing/my_custom_module/MyContainer/ElectricalSeries2"
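
  Example: since the returned paths encode the NWB hierarchy, plain string filtering is enough to narrow them down. A sketch using the example paths from the notes (the `select_series` helper is hypothetical):

```python
def select_series(paths, keyword):
    # Keep only paths whose hierarchy contains the given container name.
    return [p for p in paths if keyword in p.split("/")]

paths = [
    "acquisition/ElectricalSeries1",
    "acquisition/ElectricalSeries2",
    "processing/ecephys/LFP/ElectricalSeries1",
]
lfp_paths = select_series(paths, "LFP")
```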

Class: NwbSortingExtractor
  Docstring:
    Load an NWBFile as a SortingExtractor.

    Parameters
    ----------
    file_path : str or Path
        Path to NWB file.
    electrical_series_path : str or None, default: None
        The name of the ElectricalSeries (if multiple ElectricalSeries are present).
    sampling_frequency : float or None, default: None
        The sampling frequency in Hz (required if no ElectricalSeries is available).
    unit_table_path : str or None, default: "units"
        The path of the unit table in the NWB file.
    samples_for_rate_estimation : int, default: 100000
        The number of timestamp samples to use to estimate the rate.
        Used if "rate" is not specified in the ElectricalSeries.
    stream_mode : "fsspec" | "remfile" | "zarr" | None, default: None
        The streaming mode to use. If None it assumes the file is on the local disk.
    stream_cache_path : str or Path or None, default: None
        Local path for caching. If None it uses the system temporary directory.
    load_unit_properties : bool, default: True
        If True, all the unit properties are loaded from the NWB file and stored as properties.
    t_start : float or None, default: None
        This is the time at which the corresponding ElectricalSeries starts. NWB stores its spikes as times,
        and `t_start` is used to convert the times to frames. Concretely, the returned frames are computed as:

        `frames = (times - t_start) * sampling_frequency`.

        SpikeInterface always considers the first frame to be at the beginning of the recording, independently
        of `t_start`.
    
        When a `t_start` is not provided it will be inferred from the corresponding ElectricalSeries with name equal
        to `electrical_series_path`. The `t_start` then will be either the `ElectricalSeries.starting_time` or the
        first timestamp in the `ElectricalSeries.timestamps`.
    cache : bool, default: False
        If True, the file is cached in the file passed to stream_cache_path
        if False, the file is not cached.
    storage_options : dict | None = None,
        These are the additional kwargs (e.g. AWS credentials) that are passed to the zarr.open convenience function.
        This is only used on the "zarr" stream_mode.
    use_pynwb : bool, default: False
        Uses the pynwb library to read the NWB file. Setting this to False, the default, uses h5py
        to read the file. Using h5py can improve performance by bypassing some of the PyNWB validations.
    
    Returns
    -------
    sorting : NwbSortingExtractor
        The sorting extractor for the NWB file.
  __init__(self, file_path: 'str | Path', electrical_series_path: 'str | None' = None, sampling_frequency: 'float | None' = None, samples_for_rate_estimation: 'int' = 1000, stream_mode: 'str | None' = None, stream_cache_path: 'str | Path | None' = None, load_unit_properties: 'bool' = True, unit_table_path: 'str' = 'units', *, t_start: 'float | None' = None, cache: 'bool' = False, storage_options: 'dict | None' = None, use_pynwb: 'bool' = False)
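
  Example: the `t_start` handling above amounts to the formula `frames = (times - t_start) * sampling_frequency`. A minimal sketch of that conversion:

```python
def times_to_frames(times_s, t_start, sampling_frequency):
    # Convert absolute NWB spike times (seconds) to frames relative to
    # the start of the recording, as documented above.
    return [int(round((t - t_start) * sampling_frequency)) for t in times_s]

# Recording starts at t_start=2.0 s; a spike at 2.5 s lands on frame
# 15000 for a 30 kHz recording.
frames = times_to_frames([2.0, 2.5], t_start=2.0, sampling_frequency=30000.0)
```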

Class: NwbTimeSeriesExtractor
  Docstring:
    Load a TimeSeries from an NWBFile as a RecordingExtractor.
    
    Parameters
    ----------
    file_path : str | Path | None
        Path to NWB file or an s3 URL. Use this parameter to specify the file location
        if not using the `file` parameter.
    timeseries_path : str | None
        The path to the TimeSeries object within the NWB file. This parameter is required
        when the NWB file contains multiple TimeSeries objects. The path corresponds to
        the location within the NWB file hierarchy, e.g. 'acquisition/MyTimeSeries'.
    load_time_vector : bool, default: False
        If True, the time vector is loaded into the recording object. Useful when
        precise timing information is needed.
    samples_for_rate_estimation : int, default: 1000
        The number of timestamps used for estimating the sampling rate when
        timestamps are used instead of a fixed rate.
    stream_mode : Literal["fsspec", "remfile", "zarr"] | None, default: None
        Determines the streaming mode for reading the file.
    file : BinaryIO | None, default: None
        A file-like object representing the NWB file. Use this parameter if you have
        an in-memory representation of the NWB file instead of a file path given by `file_path`.
    cache : bool, default: False
        If True, the file is cached locally when using streaming.
    stream_cache_path : str | Path | None, default: None
        Local path for caching. Only used if `cache` is True.
    storage_options : dict | None, default: None
        Additional kwargs (e.g. AWS credentials) passed to zarr.open. Only used with
        "zarr" stream_mode.
    use_pynwb : bool, default: False
        If True, uses pynwb library to read the NWB file. Default False uses h5py/zarr
        directly for better performance.
    
    Returns
    -------
    recording : NwbTimeSeriesExtractor
        A recording extractor containing the TimeSeries data.
  __init__(self, file_path: 'str | Path | None' = None, timeseries_path: 'str | None' = None, load_time_vector: 'bool' = False, samples_for_rate_estimation: 'int' = 1000, stream_mode: "Optional[Literal['fsspec', 'remfile', 'zarr']]" = None, stream_cache_path: 'str | Path | None' = None, *, file: 'BinaryIO | None' = None, cache: 'bool' = False, storage_options: 'dict | None' = None, use_pynwb: 'bool' = False)
  Method: fetch_available_timeseries_paths(file_path: 'str | Path', stream_mode: "Optional[Literal['fsspec', 'remfile', 'zarr']]" = None, storage_options: 'dict | None' = None) -> 'list[str]'
    Docstring:
      Get paths to all TimeSeries objects in a neurodata file.
      
      Parameters
      ----------
      file_path : str | Path
          Path to the NWB file.
      stream_mode : str | None
          Streaming mode for reading remote files.
      storage_options : dict | None
          Additional options for zarr storage.
      
      Returns
      -------
      list[str]
          List of paths to TimeSeries objects.

Class: OpenEphysBinaryEventExtractor
  Docstring:
    Class for reading events saved by the Open Ephys GUI
    
    This extractor works with the Open Ephys "binary" format, which saves data using
    one file per continuous stream.
    
    https://open-ephys.github.io/gui-docs/User-Manual/Recording-data/Binary-format.html
    
    Based on neo.rawio.OpenEphysBinaryRawIO
    
    Parameters
    ----------
    folder_path : str
        The folder path to the root folder (containing the record node folders).
  __init__(self, folder_path, block_index=None)

Class: OpenEphysBinaryRecordingExtractor
  Docstring:
    Class for reading data saved by the Open Ephys GUI.
    
    This extractor works with the Open Ephys "binary" format, which saves data using
    one file per continuous stream (.dat files).
    
    https://open-ephys.github.io/gui-docs/User-Manual/Recording-data/Binary-format.html
    
    Based on neo.rawio.OpenEphysBinaryRawIO
    
    Parameters
    ----------
    folder_path : str
        The folder path to the root folder (containing the record node folders)
    load_sync_channel : bool, default: False
        If False (default) and a SYNC channel is present (e.g. Neuropixels), this is not loaded
        If True, the SYNC channel is loaded and can be accessed in the analog signals.
    load_sync_timestamps : bool, default: False
        If True, the synchronized_timestamps are loaded and set as times to the recording.
        If False (default), only the t_start and sampling rate are set, and timestamps are assumed
        to be uniform and linearly increasing
    experiment_names : str, list, or None, default: None
        If multiple experiments are available, this argument allows users to select one
        or more experiments. If None, all experiments are loaded as blocks.
        E.g. `experiment_names="experiment2"`, `experiment_names=["experiment1", "experiment2"]`
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load
    block_index : int, default: None
        If there are several blocks (experiments), specify the block index you want to load
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo
    
    Notes
    -----
    If no stream is explicitly specified and there are exactly two streams (neural data and
    synchronization data), the neural data stream will be automatically selected.
  __init__(self, folder_path: 'str | Path', load_sync_channel: 'bool' = False, load_sync_timestamps: 'bool' = False, experiment_names: 'str | list | None' = None, stream_id: 'str' = None, stream_name: 'str' = None, block_index: 'int' = None, all_annotations: 'bool' = False)

Class: OpenEphysLegacyRecordingExtractor
  Docstring:
    Class for reading data saved by the Open Ephys GUI.
    
    This extractor works with the Open Ephys "legacy" format, which saves data using
    one file per continuous channel (.continuous files).
    
    https://open-ephys.github.io/gui-docs/User-Manual/Recording-data/Open-Ephys-format.html
    
    Based on :py:class:`neo.rawio.OpenEphysRawIO`
    
    Parameters
    ----------
    folder_path : str
        The folder path to load the recordings from
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load
    block_index : int, default: None
        If there are several blocks (experiments), specify the block index you want to load
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    ignore_timestamps_errors : bool, default: None
        Deprecated keyword argument; it is now ignored.
        neo.OpenEphysRawIO now handles gaps directly, at the cost of slower reads.
  __init__(self, folder_path, stream_id=None, stream_name=None, block_index=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False, ignore_timestamps_errors: 'bool' = None)

Class: PhySortingExtractor
  Docstring:
    Load Phy format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the output Phy folder (containing the params.py).
    exclude_cluster_groups : list or str, default: None
        Cluster groups to exclude (e.g. "noise" or ["noise", "mua"]).
    load_all_cluster_properties : bool, default: True
        If True, all cluster properties are loaded from the tsv/csv files.
    
    Returns
    -------
    extractor : PhySortingExtractor
        The loaded Sorting object.
  __init__(self, folder_path: 'Path | str', exclude_cluster_groups: 'Optional[list[str] | str]' = None, load_all_cluster_properties: 'bool' = True)
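
  The Phy folder layout behind this extractor is simple: `spike_times.npy` holds spike sample
  indices and `spike_clusters.npy` holds the cluster label of each spike. The following is a
  minimal numpy sketch of that grouping step, using a hypothetical synthetic folder (it ignores
  params.py and the tsv/csv property files); it is illustrative only, not the extractor's
  implementation:

  ```python
  import os
  import tempfile

  import numpy as np

  # build a tiny synthetic Phy-style folder (hypothetical data, not a real sorting)
  folder = tempfile.mkdtemp()
  spike_times = np.array([10, 20, 35, 50, 70], dtype=np.int64)   # spike sample indices
  spike_clusters = np.array([0, 1, 0, 1, 0], dtype=np.int32)     # cluster label per spike
  np.save(os.path.join(folder, "spike_times.npy"), spike_times)
  np.save(os.path.join(folder, "spike_clusters.npy"), spike_clusters)

  # group spikes per cluster, the way a sorting extractor exposes unit spike trains
  times = np.load(os.path.join(folder, "spike_times.npy")).ravel()
  clusters = np.load(os.path.join(folder, "spike_clusters.npy")).ravel()
  spike_trains = {unit: times[clusters == unit] for unit in np.unique(clusters)}
  ```

  With real data, PhySortingExtractor additionally applies `exclude_cluster_groups` filtering
  based on the cluster-group tsv files.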

Class: Plexon2EventExtractor
  Docstring:
    Class for reading event data from plexon .pl2 files.
    
    Based on :py:class:`neo.rawio.Plexon2RawIO`
    
    Parameters
    ----------
    folder_path : str
        The folder path to load the events from.
  __init__(self, folder_path, block_index=None)

Class: Plexon2RecordingExtractor
  Docstring:
    Class for reading plexon pl2 files.
    
    Based on :py:class:`neo.rawio.Plexon2RawIO`
    
    Parameters
    ----------
    file_path : str | Path
        The file path of the plexon2 file. It should have the .pl2 extension.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    use_names_as_ids : bool, default: True
        If True, the names of the signals are used as channel ids. If False, the channel ids are a combination of the
        source id and the channel index.
    
        Example for wideband signals:
            names: ["WB01", "WB02", "WB03", "WB04"]
            ids: ["source3.1" , "source3.2", "source3.3", "source3.4"]
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    reading_attempts : int, default: 25
        Number of attempts to read the file before raising an error
        This opening process is somewhat unreliable and might fail occasionally. Adjust this higher
        if you encounter problems in opening the file.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_plexon2
    >>> recording = read_plexon2(file_path=r'my_data.pl2')
  __init__(self, file_path, stream_id=None, stream_name=None, use_names_as_ids=True, all_annotations=False, reading_attempts: 'int' = 25)

Class: Plexon2SortingExtractor
  Docstring:
    Class for reading plexon spiking data from .pl2 files.
    
    Based on :py:class:`neo.rawio.Plexon2RawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    sampling_frequency : float, default: None
        The sampling frequency of the sorting (required for multiple streams with different sampling frequencies).
  __init__(self, file_path, sampling_frequency=None)

Class: PlexonRecordingExtractor
  Docstring:
    Class for reading plexon plx files.
    
    Based on :py:class:`neo.rawio.PlexonRawIO`
    
    Parameters
    ----------
    file_path : str | Path
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: True
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
        Example for wideband signals:
            names: ["WB01", "WB02", "WB03", "WB04"]
            ids: ["0" , "1", "2", "3"]
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_plexon
    >>> recording = read_plexon(file_path=r'my_data.plx')
  __init__(self, file_path: 'str | Path', stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = True)

Class: PlexonSortingExtractor
  Docstring:
    Class for reading plexon spiking data (.plx files).
    
    Based on :py:class:`neo.rawio.PlexonRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
  __init__(self, file_path)

Class: SHYBRIDRecordingExtractor
  Docstring:
    Load SHYBRID format data as a recording extractor.
    
    Parameters
    ----------
    file_path : str or Path
        Path to the SHYBRID file.
    
    Returns
    -------
    extractor : SHYBRIDRecordingExtractor
        Loaded data.
  __init__(self, file_path)
  Method: write_recording(recording, save_path, initial_sorting_fn, dtype='float32', **job_kwargs)
    Docstring:
      Convert and save the recording extractor to SHYBRID format.
      
      Parameters
      ----------
      recording : RecordingExtractor
          The recording extractor to be converted and saved
      save_path : str
          Full path to desired target folder
      initial_sorting_fn : str
          Full path to the initial sorting csv file (can also be generated
          using the write_sorting static method from the SHYBRIDSortingExtractor)
      dtype : dtype, default: float32
          Type of the saved data
      **job_kwargs : keyword arguments for the write_to_binary_dat_format() function

Class: SHYBRIDSortingExtractor
  Docstring:
    Load SHYBRID format data as a sorting extractor.
    
    Parameters
    ----------
    file_path : str or Path
        Path to the SHYBRID file.
    sampling_frequency : int
        The sampling frequency.
    delimiter : str
        The delimiter to use for loading the file.
    
    Returns
    -------
    extractor : SHYBRIDSortingExtractor
        Loaded data.
  __init__(self, file_path, sampling_frequency, delimiter=',')
  Method: write_sorting(sorting, save_path)
    Docstring:
      Convert and save the sorting extractor to SHYBRID CSV format.
      
      Parameters
      ----------
      sorting : SortingExtractor
          The sorting extractor to be converted and saved.
      save_path : str
          Full path to the desired target folder.

Class: SinapsResearchPlatformH5RecordingExtractor
  Docstring:
    Recording extractor for the SiNAPS research platform system saved in HDF5 format.
    
    Parameters
    ----------
    file_path : str | Path
        Path to the SiNAPS .h5 file.
  __init__(self, file_path: 'str | Path')

Class: SinapsResearchPlatformRecordingExtractor
  Docstring:
    Recording extractor for the SiNAPS research platform system saved in binary format.
    
    Parameters
    ----------
    file_path : str | Path
        Path to the SiNAPS .bin file.
    stream_name : "filt" | "raw" | "aux", default: "filt"
        The stream name to extract.
        "filt" extracts the filtered data, "raw" extracts the raw data, and "aux" extracts the auxiliary data.
  __init__(self, file_path: 'str | Path', stream_name: 'str' = 'filt')

Class: Spike2RecordingExtractor
  Docstring:
    Class for reading spike2 smr files.
    smrx files are not supported by this class; use CedRecordingExtractor instead.
    
    Based on :py:class:`neo.rawio.Spike2RawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations=False, use_names_as_ids: 'bool' = False)

Class: SpikeGLXEventExtractor
  Docstring:
    Class for reading events saved on the event channel by SpikeGLX software.
    
    Parameters
    ----------
    folder_path : str
        The folder path to load the events from.
  __init__(self, folder_path, block_index=None)

Class: SpikeGLXRecordingExtractor
  Docstring:
    Class for reading data saved by SpikeGLX software.
    See https://billkarsh.github.io/SpikeGLX/
    
    Based on :py:class:`neo.rawio.SpikeGLXRawIO`
    
    Contrary to older versions, this reader is folder-based.
    If the folder contains several streams (e.g., "imec0.ap", "nidq", "imec0.lf"),
    then the stream has to be specified with "stream_id" or "stream_name".
    
    Parameters
    ----------
    folder_path : str
        The folder path to load the recordings from.
    load_sync_channel : bool, default: False
        Whether or not to load the last channel in the stream, which is typically used for synchronization.
        If True, then the probe is not loaded.
    stream_id : str or None, default: None
        If there are several streams, specify the stream id you want to load.
        For example, "imec0.ap", "nidq", or "imec0.lf".
    stream_name : str or None, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_spikeglx
    >>> recording = read_spikeglx(folder_path=r'path_to_folder_with_data', load_sync_channel=False)
    # we can load the sync channel, but then the probe is not loaded
    >>> recording = read_spikeglx(folder_path=r'path_to_folder_with_data', load_sync_channel=True)
  __init__(self, folder_path, load_sync_channel=False, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: SpikeGadgetsRecordingExtractor
  Docstring:
    Class for reading .rec files from SpikeGadgets.
    
    Based on :py:class:`neo.rawio.SpikeGadgetsRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str or None, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str or None, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_spikegadgets
    >>> recording = read_spikegadgets(file_path=r'my_data.rec')
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: SpykingCircusSortingExtractor
  Docstring:
    Load SpykingCircus format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the SpykingCircus folder.
    
    Returns
    -------
    extractor : SpykingCircusSortingExtractor
        Loaded data.
  __init__(self, folder_path)

Class: TdtRecordingExtractor
  Docstring:
    Class for reading TDT folder.
    
    Based on :py:class:`neo.rawio.TdtRawIO`
    
    Parameters
    ----------
    folder_path : str
        The folder path to the tdt folder.
    stream_id : str or None, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str or None, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    block_index : int, default: None
        If there are several blocks (experiments), specify the block index you want to load
  __init__(self, folder_path, stream_id=None, stream_name=None, block_index=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: TridesclousSortingExtractor
  Docstring:
    Load Tridesclous format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the Tridesclous folder.
    chan_grp : list or None, default: None
        The channel group(s) to load.
    
    Returns
    -------
    extractor : TridesclousSortingExtractor
        Loaded data.
  __init__(self, folder_path, chan_grp=None)

Class: WaveClusSnippetsExtractor
  Docstring:
    Abstract class representing several multichannel snippets.
  __init__(self, file_path)
  Method: write_snippets(snippets_extractor, save_file_path)
    Docstring:
      None

Class: WaveClusSortingExtractor
  Docstring:
    Load WaveClus format data as a sorting extractor.
    
    Parameters
    ----------
    file_path : str or Path
        Path to the WaveClus file.
    keep_good_only : bool, default: True
        Whether to only keep good units.
    
    Returns
    -------
    extractor : WaveClusSortingExtractor
        Loaded data.
  __init__(self, file_path, keep_good_only=True)

Class: WhiteMatterRecordingExtractor
  Docstring:
    RecordingExtractor for the WhiteMatter binary format.
    
    The recording format is a raw binary file containing int16 data,
    with an 8-byte header offset.
    
    Parameters
    ----------
    file_path : Path
        Path to the binary file.
    sampling_frequency : float
        The sampling frequency.
    num_channels : int
        Number of channels in the recording.
    channel_ids : list or None, default: None
        A list of channel ids. If None, channel_ids = list(range(num_channels)).
    is_filtered : bool or None, default: None
        If True, the recording is assumed to be filtered. If None, `is_filtered` is not set.
  __init__(self, file_path: Union[str, pathlib.Path], sampling_frequency: float, num_channels: int, channel_ids: Optional[List] = None, is_filtered: Optional[bool] = None)
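
  The layout described above (an 8-byte header followed by int16 samples) can be read directly
  with numpy. The sketch below round-trips a tiny synthetic file; it assumes sample-major
  (time, channel) interleaving, which is an assumption for illustration rather than a statement
  about the extractor's internals:

  ```python
  import os
  import tempfile

  import numpy as np

  # build a tiny synthetic file in the described layout: 8-byte header + int16 data
  num_channels = 4
  num_samples = 100
  data = np.arange(num_samples * num_channels, dtype=np.int16).reshape(num_samples, num_channels)

  path = os.path.join(tempfile.mkdtemp(), "example.bin")
  with open(path, "wb") as f:
      f.write(b"\x00" * 8)      # the 8-byte header offset
      f.write(data.tobytes())   # interleaved int16 samples

  # read it back, skipping the header via the offset argument
  raw = np.memmap(path, dtype=np.int16, mode="r", offset=8)
  traces = raw.reshape(-1, num_channels)
  ```

  In practice WhiteMatterRecordingExtractor handles this offset and dtype for you; the sketch
  only shows why `num_channels` and `sampling_frequency` must be supplied, since the raw file
  carries no self-describing metadata.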

Class: YassSortingExtractor
  Docstring:
    Load YASS format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the YASS folder.
    
    Returns
    -------
    extractor : YassSortingExtractor
        Loaded data.
  __init__(self, folder_path)

Class: event_class
  Docstring:
    Class for reading events saved on the event channel by SpikeGLX software.
    
    Parameters
    ----------
    folder_path : str
        The folder path to load the events from.
  __init__(self, folder_path, block_index=None)

Function: get_neo_num_blocks(extractor_name, *args, **kwargs) -> 'int'
  Docstring:
    Returns the number of NEO blocks.
    For multi-block datasets, the `block_index` argument can be used to select
    which block to read with the `read_**extractor_name**()` function.
    
    
    Parameters
    ----------
    extractor_name : str
        The extractor name (available through the se.recording_extractor_full_dict).
    *args, **kwargs : arguments
        Extractor specific arguments. You can check extractor specific arguments with:
        `read_**extractor_name**?`
    
    Returns
    -------
    int
        Number of NEO blocks
    
    Note
    ----
    Most datasets contain a single block.

Function: get_neo_streams(extractor_name, *args, **kwargs)
  Docstring:
    Returns the NEO streams (stream names and stream ids) associated with a dataset.
    For multi-stream datasets, the `stream_id` or `stream_name` arguments can be used
    to select which stream to read with the `read_**extractor_name**()` function.
    
    Parameters
    ----------
    extractor_name : str
        The extractor name (available through the se.recording_extractor_full_dict).
    *args, **kwargs : arguments
        Extractor specific arguments. You can check extractor specific arguments with:
        `read_**extractor_name**?`
    
    
    Returns
    -------
    list
        List of NEO stream names
    list
        List of NEO stream ids

Function: get_neuropixels_channel_groups(num_channels=384, num_channels_per_adc=12)
  Docstring:
    Returns groups of simultaneously sampled channels on a Neuropixels probe.
    
    The Neuropixels ADC sampling pattern is as follows:
    
    Channels:   ADCs:
    |||         |||
    ...         ...
    26 27       2 3
    24 25       2 3
    22 23       0 1
    ...         ...
    2 3         0 1
    0 1         0 1 <-- even and odd channels are digitized by separate ADCs
    |||         |||
     V           V
    
    This information is needed to perform the preprocessing.common_reference operation
    on channels that are sampled synchronously.
    
    Parameters
    ----------
    num_channels : int, default: 384
        The total number of channels in a recording.
        All currently available Neuropixels variants have 384 channels.
    num_channels_per_adc : int, default: 12
        The number of channels per ADC on the probe.
        Neuropixels 1.0 probes have 32 ADCs, each handling 12 channels.
        Neuropixels 2.0 probes have 24 ADCs, each handling 16 channels.
    
    Returns
    -------
    groups : list
        A list of lists of simultaneously sampled channel indices

Function: get_neuropixels_sample_shifts(num_channels: 'int' = 384, num_channels_per_adc: 'int' = 12, num_cycles: 'Optional[int]' = None) -> 'np.ndarray'
  Docstring:
    Calculate the relative sampling phase (inter-sample shifts) for each channel
    in Neuropixels probes due to ADC multiplexing.
    
    Neuropixels probes sample channels sequentially through multiple ADCs,
    introducing slight temporal delays between channels within each sampling cycle.
    These inter-sample shifts are fractions of the sampling period and are crucial
    to consider during preprocessing steps, such as phase correction, to ensure
    accurate alignment of the recorded signals.
    
    This function computes these relative phase shifts, returning an array where
    each value represents the fractional delay (ranging from 0 to 1) for the
    corresponding channel.
    
    Parameters
    ----------
    num_channels : int, default: 384
        Total number of channels in the recording.
        Neuropixels probes typically have 384 channels.
    num_channels_per_adc : int, default: 12
        Number of channels assigned to each ADC on the probe.
        Neuropixels 1.0 probes have 32 ADCs, each handling 12 channels.
        Neuropixels 2.0 probes have 24 ADCs, each handling 16 channels.
    num_cycles : int or None, default: None
        Number of cycles in the ADC sampling sequence.
        Neuropixels 1.0 probes have 13 cycles for AP (action potential) signals
        and 12 for LFP (local field potential) signals.
        Neuropixels 2.0 probes have 16 cycles.
        If None, defaults to the value of `num_channels_per_adc`.
    
    Returns
    -------
    sample_shifts : np.ndarray
        Array of relative phase shifts for each channel, with values ranging from 0 to 1,
        representing the fractional delay within the sampling period due to sequential ADC sampling.

Class: read_alf_sorting
  Docstring:
    Load ALF format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the ALF folder.
    sampling_frequency : int, default: 30000
        The sampling frequency.
    
    Returns
    -------
    extractor : ALFSortingExtractor
        The loaded data.
  __init__(self, folder_path, sampling_frequency=30000)

Class: read_alphaomega
  Docstring:
    Class for reading from AlphaRS and AlphaLab SnR boards.
    
    Based on :py:class:`neo.rawio.AlphaOmegaRawIO`
    
    Parameters
    ----------
    folder_path : str or Path-like
        The folder path to the AlphaOmega recordings.
    lsx_files : list of strings or None, default: None
        A list of files that refer to the mpx files to load.
    stream_id : {"RAW", "LFP", "SPK", "ACC", "AI", "UD"}, default: "RAW"
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_alphaomega
    >>> recording = read_alphaomega(folder_path="alphaomega_folder")
  __init__(self, folder_path, lsx_files=None, stream_id='RAW', stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: read_alphaomega_event
  Docstring:
    Class for reading events from the AlphaOmega MPX file format.
    
    Parameters
    ----------
    folder_path : str or Path-like
        The folder path to the AlphaOmega events.
  __init__(self, folder_path)

Class: read_axona
  Docstring:
    Class for reading Axona RAW format.
    
    Based on :py:class:`neo.rawio.AxonaRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_axona
    >>> recording = read_axona(file_path=r'my_data.set')
  __init__(self, file_path: 'str | Path', all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Function: read_bids(folder_path)
  Docstring:
    Load a BIDS folder of data into extractor objects.
    
    The following files are considered:
    
      * _channels.tsv
      * _contacts.tsv
      * _ephys.nwb
      * _probes.tsv
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the BIDS folder.
    
    Returns
    -------
    extractors : list of extractors
        The loaded data, with attached Probes.

Class: read_biocam
  Docstring:
    Class for reading data from a Biocam file from 3Brain.
    
    Based on :py:class:`neo.rawio.BiocamRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    mea_pitch : float, default: None
        The inter-electrode distance (pitch) between electrodes.
    electrode_width : float, default: None
        Width of the electrodes in um.
    fill_gaps_strategy : "zeros" | "synthetic_noise" | None, default: None
        The strategy used to fill the gaps in the data when the file uses
        event-based compression. If the file is event-based compressed, a
        fill-gaps strategy must be specified:
    
        * "zeros": the gaps are filled with unsigned 0s (2048). This value is the "0" of the unsigned 12 bits
                   representation of the data.
        * "synthetic_noise": the gaps are filled with synthetic noise.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, mea_pitch=None, electrode_width=None, fill_gaps_strategy=None, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: read_blackrock
  Docstring:
    Class for reading BlackRock data.
    
    Based on :py:class:`neo.rawio.BlackrockRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: read_blackrock_sorting
  Docstring:
    Class for reading BlackRock spiking data.
    
    Based on :py:class:`neo.rawio.BlackrockRawIO`
    
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from
    stream_id : str, default: None
        Used to extract information about the sampling frequency and t_start from the analog signal if provided.
    stream_name : str, default: None
        Used to extract information about the sampling frequency and t_start from the analog signal if provided.
    sampling_frequency : float, default: None
        The sampling frequency for the sorting extractor. When signal data is available (.nsx files), those files will
        be used to extract the frequency automatically. Otherwise, the sampling frequency needs to be specified for
        this extractor to be initialized.
    nsx_to_load : int | list | str, default: None
        IDs of the nsX files from which to load data, e.g., if set to 5 only data from the ns5 file are loaded.
        If "all" or None, all nsX files will be loaded. If an empty list, no nsX files will be loaded.
  __init__(self, file_path, stream_id: 'Optional[str]' = None, stream_name: 'Optional[str]' = None, sampling_frequency: 'Optional[float]' = None, nsx_to_load: 'Optional[int | list | str]' = None)

Class: read_cbin_ibl
  Docstring:
    Load IBL data as an extractor object.
    
    IBL uses a custom format: a compressed binary with spikeglx metadata.
    
    The format is like spikeglx (it has a meta file) but contains:
    
      * "cbin" file (instead of "bin")
      * "ch" file used by mtscomp for compression info
    
    Parameters
    ----------
    folder_path : str or Path
        Path to ibl folder.
    load_sync_channel : bool, default: False
        Whether to load the last (sync) channel.
        If False, the probe is loaded.
    stream_name : {"ap", "lp"}, default: "ap"
        Whether to load the AP or the LFP band.
    cbin_file_path : str, Path or None, default: None
        The cbin file of the recording. If None, searches in `folder_path` for the file.
    cbin_file : str or None, default: None
        (deprecated) The cbin file of the recording. If None, searches in `folder_path` for the file.
    
    Returns
    -------
    recording : CompressedBinaryIblExtractor
        The loaded data.
  __init__(self, folder_path=None, load_sync_channel=False, stream_name='ap', cbin_file_path=None, cbin_file=None)

Class: read_ced
  Docstring:
    Class for reading smr/smrw CED file.
    
    Based on :py:class:`neo.rawio.CedRawIO` / sonpy
    
    An alternative to read_spike2, which does not handle smrx files.
    
    Parameters
    ----------
    file_path : str
        The file path to the smr or smrx file.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_ced
    >>> recording = read_ced(file_path=r'my_data.smr')
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: read_cellexplorer
  Docstring:
    Extracts spiking information from a `.mat` file stored in the CellExplorer format.
    Spike times are stored in units of seconds, so we transform them to units of samples.
    
    The newer version of the format is described here:
    https://cellexplorer.org/data-structure/
    
    Whereas the old format is described here:
    https://github.com/buzsakilab/buzcode/wiki/Data-Formatting-Standards
    
    Parameters
    ----------
    file_path : str | Path
        Path to the `.mat` file containing spikes. Usually named `session_id.spikes.cellinfo.mat`.
    sampling_frequency : float | None, default: None
        The sampling frequency of the data. If None, it will be extracted from the files.
    session_info_file_path : str | Path | None, default: None
        Path to the `sessionInfo.mat` file. If None, it will be inferred from the file_path.
  __init__(self, file_path: 'str | Path', sampling_frequency: 'float | None' = None, session_info_file_path: 'str | Path | None' = None)

Class: read_combinato
  Docstring:
    Load Combinato format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the Combinato folder.
    sampling_frequency : float or None, default: None
        The sampling frequency.
    user : str, default: "simple"
        The username that ran the sorting.
    det_sign : "both", "pos", "neg", default: "both"
        Which sign was used for detection.
    keep_good_only : bool, default: True
        Whether to only keep good units.
    
    Returns
    -------
    extractor : CombinatoSortingExtractor
        The loaded data.
  __init__(self, folder_path, sampling_frequency=None, user='simple', det_sign='both', keep_good_only=True)

Class: read_edf
  Docstring:
    Class for reading EDF (European data format) folder.
    
    Based on :py:class:`neo.rawio.EDFRawIO`
    
    Parameters
    ----------
    file_path: str
        The file path to load the recordings from.
    stream_id: str, default: None
        If there are several streams, specify the stream id you want to load.
        For this neo reader streams are defined by their sampling frequency.
    stream_name: str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations: bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: read_hdsort
  Docstring:
    Load HDSort format data as a sorting extractor.
    
    Parameters
    ----------
    file_path : str or Path
        Path to HDSort mat file.
    keep_good_only : bool, default: True
        Whether to only keep good units.
    
    Returns
    -------
    extractor : HDSortSortingExtractor
        The loaded data.
  __init__(self, file_path, keep_good_only=True)

Class: read_herdingspikes
  Docstring:
    Load HerdingSpikes format data as a sorting extractor.
    
    Parameters
    ----------
    file_path : str or Path
        Path to the HerdingSpikes file.
    load_unit_info : bool, default: True
        Whether to load the unit info from the file.
    
    Returns
    -------
    extractor : HerdingSpikesSortingExtractor
        The loaded data.
  __init__(self, file_path, load_unit_info=True)
  Method: load_unit_info(self)
    Docstring:
      if 'centres' in self._rf.keys() and len(self._spike_times) > 0:
          self._unit_locs = self._rf['centres'][()]  # cache for faster access
          for u_i, unit_id in enumerate(self._unit_ids):
              self.set_unit_property(unit_id, property_name='unit_location', value=self._unit_locs[u_i])
      inds = []  # get these only once
      for unit_id in self._unit_ids:
          inds.append(np.where(self._cluster_id == unit_id)[0])
      if 'data' in self._rf.keys() and len(self._spike_times) > 0:
          d = self._rf['data'][()]
          for i, unit_id in enumerate(self._unit_ids):
              self.set_unit_spike_features(unit_id, 'spike_location', d[:, inds[i]].T)
      if 'ch' in self._rf.keys() and len(self._spike_times) > 0:
          d = self._rf['ch'][()]
          for i, unit_id in enumerate(self._unit_ids):
              self.set_unit_spike_features(unit_id, 'max_channel', d[inds[i]])

Class: read_ibl_recording
  Docstring:
    Stream IBL data as an extractor object.
    
    Parameters
    ----------
    eid : str or None, default: None
        The session ID to extract recordings for.
        In ONE, this is sometimes referred to as the "eid".
        When doing a session lookup such as
    
        >>> from one.api import ONE
        >>> one = ONE(base_url="https://openalyx.internationalbrainlab.org", password="international", silent=True)
        >>> sessions = one.alyx.rest("sessions", "list", tag="2022_Q2_IBL_et_al_RepeatedSite")
    
        each returned value in `sessions` refers to it as the "id".
    pid : str or None, default: None
        Probe insertion UUID in Alyx. To retrieve the PID from a session (or eid), use the following code:
    
        >>> from one.api import ONE
        >>> one = ONE(base_url="https://openalyx.internationalbrainlab.org", password="international", silent=True)
        >>> pids, _ = one.eid2pid("session_eid")
        >>> pid = pids[0]
    
        Either `eid` or `pid` must be provided.
    stream_name : str
        The name of the stream to load for the session.
        These can be retrieved by calling `get_stream_names(eid="<your session ID>")`.
    load_sync_channel : bool, default: False
        Whether to load the last (sync) channel.
        If False, the probe is attached to the recording.
    cache_folder : str or None, default: None
        The location to temporarily store chunks of data during streaming.
        The default uses the folder designated by ONE.alyx._par.CACHE_DIR / "cache", which is typically the designated
        "Downloads" folder on your operating system. As long as `remove_cached` is set to True, the only files that will
        persist in this folder are the metadata header files and the chunk of data being actively streamed and used in RAM.
    remove_cached : bool, default: True
        Whether or not to remove streamed data from the cache immediately after it is read.
        If you expect to reuse fetched data many times, and have the disk space available, it is recommended to set this to False.
    stream : bool, default: True
        Whether or not to stream the data.
    one : one.api.OneAlyx, default: None
        An instance of the ONE API to use for data loading.
        If not provided, a default instance is created using the default parameters.
        If you need to use a specific instance, you can create it using the ONE API and pass it here.
    
    Returns
    -------
    recording : IblStreamingRecordingExtractor
        The recording extractor which allows access to the traces.
  __init__(self, eid: 'str | None' = None, pid: 'str | None' = None, stream_name: 'str | None' = None, load_sync_channel: 'bool' = False, cache_folder: 'Optional[Path | str]' = None, remove_cached: 'bool' = True, stream: 'bool' = True, one: "'one.api.OneAlyx'" = None, stream_type: 'str | None' = None)
  Method: get_stream_names(eid: 'str', cache_folder: 'Optional[Union[Path, str]]' = None, one=None) -> 'List[str]'
    Docstring:
      Convenient retrieval of available stream names.
      
      Parameters
      ----------
      eid : str
          The experiment ID to extract recordings for.
          In ONE, this is sometimes referred to as the "eid".
          When doing a session lookup such as
      
          >>> from one.api import ONE
          >>> one = ONE(base_url="https://openalyx.internationalbrainlab.org", password="international", silent=True)
          >>> eids = one.alyx.rest("sessions", "list", tag="2022_Q2_IBL_et_al_RepeatedSite")
      
          each returned value in `eids` refers to it as the experiment "id".
      cache_folder : str or None, default: None
          The location to temporarily store chunks of data during streaming.
      one : one.api.OneAlyx, default: None
          An instance of the ONE API to use for data loading.
          If not provided, a default instance is created using the default parameters.
          If you need to use a specific instance, you can create it using the ONE API and pass it here.
      
      Returns
      -------
      stream_names : list of str
          List of stream names as expected by the `stream_name` argument for the class initialization.
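
A minimal sketch of filtering the output of `get_stream_names` for an AP-band stream. The `.ap` / `.lf` suffix convention is an assumption about IBL stream naming, so check the names your own session actually returns:

```python
# Hedged sketch: pick an AP-band stream from a get_stream_names()-style list.
# The ".ap" / ".lf" suffixes are an assumed naming convention, not guaranteed.
def pick_ap_stream(stream_names):
    """Return the first stream name ending in '.ap', or raise if none exists."""
    ap_streams = [name for name in stream_names if name.endswith(".ap")]
    if not ap_streams:
        raise ValueError(f"no AP stream found among {stream_names}")
    return ap_streams[0]
```

The selected name would then be passed as the `stream_name` argument when initializing the extractor.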

Class: read_ibl_sorting
  Docstring:
    Load IBL data as a sorting extractor.
    
    Parameters
    ----------
    pid: str
        Probe insertion UUID in Alyx. To retrieve the PID from a session (or eid), use the following code:
    
        >>> from one.api import ONE
        >>> one = ONE(base_url="https://openalyx.internationalbrainlab.org", password="international", silent=True)
        >>> pids, _ = one.eid2pid("session_eid")
        >>> pid = pids[0]
    one: One | dict, required
        Instance of ONE.api or dict to use for data loading.
        For multi-processing applications, this can also be a dictionary of ONE.api arguments
        For example: one=dict(base_url='https://alyx.internationalbrainlab.org', mode='remote')
    good_clusters_only: bool, default: False
        If True, only load the good clusters
    load_unit_properties: bool, default: True
        If True, load the unit properties from the IBL database
    kwargs: dict, optional
        Additional keyword arguments to pass to the IBL SpikeSortingLoader constructor, such as `revision`.

    Returns
    -------
    extractor : IBLSortingExtractor
        The loaded data.
  __init__(self, pid: 'str', good_clusters_only: 'bool' = False, load_unit_properties: 'bool' = True, one=None, **kwargs)

Class: read_intan
  Docstring:
    Class for reading data from an Intan board. Supports rhd and rhs formats.
    
    Based on :py:class:`neo.rawio.IntanRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    ignore_integrity_checks : bool, default: False
        If True, data that violates integrity assumptions will be loaded. At the moment the only integrity
        check we perform is that timestamps are continuous. Setting this to True will ignore this check and set
        the attribute `discontinuous_timestamps` to True in the underlying neo object.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
        In Intan the ids provided by NeoRawIO are the hardware channel ids while the names are custom names given by
        the user
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_intan
    # intan amplifier data is stored in stream_id = '0'
    >>> recording = read_intan(file_path=r'my_data.rhd', stream_id='0')
    # intan has multi-file formats as well, but in this case our path should point to the header file 'info.rhd'
    >>> recording = read_intan(file_path=r'info.rhd', stream_id='0')
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations=False, use_names_as_ids=False, ignore_integrity_checks: 'bool' = False)

Class: read_kilosort
  Docstring:
    Load Kilosort format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the output Phy folder (containing the params.py).
    keep_good_only : bool, default: False
        Whether to only keep good units.
        If True, only Kilosort-labeled 'good' units are returned.
    remove_empty_units : bool, default: True
        If True, empty units are removed from the sorting extractor.
    
    Returns
    -------
    extractor : KiloSortSortingExtractor
        The loaded Sorting object.
  __init__(self, folder_path: 'Path | str', keep_good_only: 'bool' = False, remove_empty_units: 'bool' = True)
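
As a hedged sketch of the call pattern, the helper below checks for the `params.py` file the extractor expects before loading; `"kilosort_output"` is a placeholder path:

```python
from pathlib import Path

def is_phy_folder(folder):
    """read_kilosort expects the Phy output folder containing params.py."""
    return (Path(folder) / "params.py").exists()

folder = "kilosort_output"  # placeholder; point this at your Phy output folder
if is_phy_folder(folder):
    from spikeinterface.extractors import read_kilosort
    # Note: keep_good_only defaults to False, so all units are loaded unless
    # you explicitly set it to True.
    sorting = read_kilosort(folder_path=folder, keep_good_only=True)
```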

Class: read_klusta
  Docstring:
    Load Klusta format data as a sorting extractor.
    
    Parameters
    ----------
    file_or_folder_path : str or Path
        Path to the Klusta file or folder.
    exclude_cluster_groups : list or str, default: None
        Cluster groups to exclude (e.g. "noise" or ["noise", "mua"]).
    
    Returns
    -------
    extractor : KlustaSortingExtractor
        The loaded data.
  __init__(self, file_or_folder_path, exclude_cluster_groups=None)

Class: read_maxwell
  Docstring:
    Class for reading data from Maxwell device.
    It handles MaxOne (old and new format) and MaxTwo.
    
    Based on :py:class:`neo.rawio.MaxwellRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to the maxwell h5 file.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
        For MaxTwo when there are several wells at the same time you
        need to specify stream_id='well000' or 'well001', etc.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    rec_name : str, default: None
        When the file contains several recordings you need to specify the one
        you want to extract. (rec_name='rec0000').
    install_maxwell_plugin : bool, default: False
        If True, install the maxwell plugin for neo.
    block_index : int, default: None
        If there are several blocks (experiments), specify the block index you want to load
  __init__(self, file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False, rec_name=None, install_maxwell_plugin=False, use_names_as_ids: 'bool' = False)
  Method: install_maxwell_plugin(self, force_download=False)
    Docstring:
      None
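
For MaxTwo multi-well files, a tiny helper can build the stream id; the zero-padded three-digit `'well000'` pattern is an assumption based on the docstring above, so verify it against the stream ids your file actually exposes:

```python
# Hedged sketch: build a MaxTwo stream id of the 'well000' form described
# above. Three-digit zero padding is an assumption; check your file's
# actual stream ids.
def maxtwo_stream_id(well_index):
    return f"well{well_index:03d}"
```

For example, `read_maxwell(file_path, stream_id=maxtwo_stream_id(1))`.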

Class: read_maxwell_event
  Docstring:
    Class for reading TTL events from Maxwell files.
  __init__(self, file_path)

Class: read_mclust
  Docstring:
    Load MClust sorting solution as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to folder with t files.
    sampling_frequency : float
        The sampling frequency in Hz.
    sampling_frequency_raw: float or None, default: None
        Required to read files with raw formats. In that case, the samples are saved in the same
        unit as the input data.
        Examples:
            - If raw time is in tens of ms, sampling_frequency_raw=10000
            - If raw time is in samples, sampling_frequency_raw=sampling_frequency

    Returns
    -------
    extractor : MClustSortingExtractor
        Loaded data.
  __init__(self, folder_path, sampling_frequency, sampling_frequency_raw=None)
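
The `sampling_frequency_raw` examples above can be made concrete with a small conversion sketch; this is an assumption about the arithmetic implied by the docstring, not the extractor's actual code:

```python
# Hedged sketch of the sampling_frequency_raw conversion described above:
# raw times divided by sampling_frequency_raw give seconds, which map to
# sample indices at sampling_frequency.
def mclust_raw_to_sample(raw_time, sampling_frequency, sampling_frequency_raw):
    seconds = raw_time / sampling_frequency_raw
    return int(round(seconds * sampling_frequency))
```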

Class: read_mcsh5
  Docstring:
    Load a MCS H5 file as a recording extractor.
    
    Parameters
    ----------
    file_path : str or Path
        The path to the MCS h5 file.
    stream_id : int, default: 0
        The stream ID to load.
    
    Returns
    -------
    recording : MCSH5RecordingExtractor
        The loaded data.
  __init__(self, file_path, stream_id=0)

Class: read_mcsraw
  Docstring:
    Class for reading data from "Raw" Multi Channel System (MCS) format.
    This format is NOT the native MCS format (.mcd).
    This format is a raw format with an internal binary header exported by the
    "MC_DataTool binary conversion" with the option header selected.
    
    Based on :py:class:`neo.rawio.RawMCSRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    block_index : int, default: None
        If there are several blocks, specify the block index you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, block_index=None, all_annotations=False, use_names_as_ids: 'bool' = False)

Class: read_mda_recording
  Docstring:
    Load MDA format data as a recording extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the MDA folder.
    raw_fname : str, default: "raw.mda"
        File name of raw file
    params_fname : str, default: "params.json"
        File name of params file
    geom_fname : str, default: "geom.csv"
        File name of geom file
    
    Returns
    -------
    extractor : MdaRecordingExtractor
        The loaded data.
  __init__(self, folder_path, raw_fname='raw.mda', params_fname='params.json', geom_fname='geom.csv')
  Method: write_recording(recording, save_path, params={}, raw_fname='raw.mda', params_fname='params.json', geom_fname='geom.csv', dtype=None, verbose=False, **job_kwargs)
    Docstring:
      Write a recording to file in MDA format.
      
      Parameters
      ----------
      recording : RecordingExtractor
          The recording extractor to be saved.
      save_path : str or Path
          The folder to save the Mda files.
      params : dictionary
          Dictionary with optional parameters to save metadata.
          Sampling frequency is appended to this dictionary.
      raw_fname : str, default: "raw.mda"
          File name of raw file
      params_fname : str, default: "params.json"
          File name of params file
      geom_fname : str, default: "geom.csv"
          File name of geom file
      dtype : dtype or None, default: None
          Data type to be used. If None dtype is same as recording traces.
      verbose : bool
          If True, shows progress bar when saving recording.
      **job_kwargs:
          Use by job_tools modules to set:
      
              * chunk_size or chunk_memory, or total_memory
              * n_jobs
              * progress_bar

Class: read_mda_sorting
  Docstring:
    Load MDA format data as a sorting extractor.
    
    NOTE: As in the MDA format, the max_channel property indexes the channels that are given as input
    to the sorter.
    If sorting was run on a subset of channels of the recording, then the max_channel values are
    based on that subset, so care must be taken when associating these values with a recording.
    If additional sorting segments are added to this sorting extractor after initialization,
    then max_channel will not be updated. The max_channel indices begin at 1.
    
    Parameters
    ----------
    file_path : str or Path
        Path to the MDA file.
    sampling_frequency : int
        The sampling frequency.
    
    Returns
    -------
    extractor : MdaSortingExtractor
        The loaded data.
  __init__(self, file_path, sampling_frequency)
  Method: write_sorting(sorting, save_path, write_primary_channels=False)
    Docstring:
      None

Function: read_mearec(file_path)
  Docstring:
    Read a MEArec file.
    
    Parameters
    ----------
    file_path : str or Path
        Path to MEArec h5 file
    
    Returns
    -------
    recording : MEArecRecordingExtractor
        The recording extractor object
    sorting : MEArecSortingExtractor
        The sorting extractor object

Class: read_neuralynx
  Docstring:
    Class for reading neuralynx folder
    
    Based on :py:class:`neo.rawio.NeuralynxRawIO`
    
    Parameters
    ----------
    folder_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    exclude_filename : list[str], default: None
        List of filename to exclude from the loading.
        For example, use `exclude_filename=["events.nev"]` to skip loading the event file.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    strict_gap_mode : bool, default: False
        See neo documentation.
        Detect gaps using strict mode or not.
        * strict_gap_mode = True: a gap is considered when the timestamp difference between
        two consecutive data packets is more than one sample interval.
        * strict_gap_mode = False: a gap has an increased tolerance. Some new systems
        with different clocks need this option; otherwise, too many gaps are detected.

        Note that here the default is False, contrary to neo.
  __init__(self, folder_path: 'str | Path', stream_id=None, stream_name=None, all_annotations=False, exclude_filename=None, strict_gap_mode=False, use_names_as_ids: 'bool' = False)

Class: read_neuralynx_sorting
  Docstring:
    Class for reading spike data from a folder with neuralynx spiking data (i.e .nse and .ntt formats).
    
    Based on :py:class:`neo.rawio.NeuralynxRawIO`
    
    Parameters
    ----------
    folder_path : str
        The file path to load the recordings from.
    sampling_frequency : float
        The sampling frequency for the spiking channels. When the signal data is available (.ncs) those files will be
        used to extract the frequency. Otherwise, the sampling frequency needs to be specified for this extractor.
    stream_id : str, default: None
        Used to extract information about the sampling frequency and t_start from the analog signal if provided.
    stream_name : str, default: None
        Used to extract information about the sampling frequency and t_start from the analog signal if provided.
  __init__(self, folder_path: 'str', sampling_frequency: 'Optional[float]' = None, stream_id: 'Optional[str]' = None, stream_name: 'Optional[str]' = None)

Class: read_neuroexplorer
  Docstring:
    Class for reading NEX (NeuroExplorer data format) files.
    
    Based on :py:class:`neo.rawio.NeuroExplorerRawIO`
    
    Importantly, at the moment, this extractor only loads one channel of the recording.
    This is because the NeuroExplorerRawIO class does not support multi-channel recordings,
    as in the NeuroExplorer format channels might have different sampling rates.
    
    Consider extracting all the channels and then concatenating them with the aggregate_channels function.
    
    >>> from spikeinterface.extractors.neoextractors.neuroexplorer import NeuroExplorerRecordingExtractor
    >>> from spikeinterface.core import aggregate_channels
    >>>
    >>> file_path="/the/path/to/your/nex/file.nex"
    >>>
    >>> streams = NeuroExplorerRecordingExtractor.get_streams(file_path=file_path)
    >>> stream_names = streams[0]
    >>>
    >>> your_signal_stream_names = "Here goes the logic to filter from stream names the ones that you know have the same sampling rate and you want to aggregate"
    >>>
    >>> recording_list = [NeuroExplorerRecordingExtractor(file_path=file_path, stream_name=stream_name) for stream_name in your_signal_stream_names]
    >>> recording = aggregate_channels(recording_list)
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
        For this neo reader streams are defined by their sampling frequency.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: read_neuronexus
  Docstring:
    Class for reading data from NeuroNexus Allego.
    
    Based on :py:class:`neo.rawio.NeuronexusRawIO`
    
    Parameters
    ----------
    file_path : str | Path
        The file path to the metadata .xdat.json file of an Allego session
    stream_id : str | None, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str | None, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
        In Neuronexus the ids provided by NeoRawIO are the hardware channel ids stored as `ntv_chan_name` within
        the metadata and the names are the `chan_names`.
  __init__(self, file_path: 'str | Path', stream_id: 'str | None' = None, stream_name: 'str | None' = None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Function: read_neuroscope(file_path, stream_id=None, keep_mua_units=False, exclude_shanks=None, load_recording=True, load_sorting=False)
  Docstring:
    Read neuroscope recording and sorting.
    This function assumes that all .res and .clu files are in the same folder as
    the .xml file.
    
    Parameters
    ----------
    file_path : str
        The xml file.
    stream_id : str or None
        The stream id to load. If None, the first stream is loaded
    keep_mua_units : bool, default: False
        Optional. Whether or not to return sorted spikes from multi-unit activity
    exclude_shanks : list
        Optional. List of indices to ignore. The set of all possible indices is chosen by default, extracted as the
        final integer of all the .res.%i and .clu.%i pairs.
    load_recording : bool, default: True
        If True, the recording is loaded
    load_sorting : bool, default: False
        If True, the sorting is loaded
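
The shank indices mentioned for `exclude_shanks` come from the trailing integer of the `.res.%i` / `.clu.%i` file names; a minimal sketch of that extraction (a hypothetical helper, not part of the API):

```python
# Hedged sketch: extract the shank index from a .res.%i / .clu.%i file name,
# i.e. the final integer after the last dot.
def shank_index(filename):
    return int(filename.rsplit(".", 1)[-1])
```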

Class: read_neuroscope_recording
  Docstring:
    Class for reading data from neuroscope
    Ref: http://neuroscope.sourceforge.net
    
    Based on :py:class:`neo.rawio.NeuroScopeRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to the binary container usually a .dat, .lfp, .eeg extension.
    xml_file_path : str, default: None
        The path to the xml file. If None, the xml file is assumed to have the same name as the binary file.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, xml_file_path=None, stream_id=None, stream_name: 'bool' = None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: read_neuroscope_sorting
  Docstring:
    Extracts spiking information from an arbitrary number of .res.%i and .clu.%i files in the general folder path.
    
    The .res file is a text file with a sorted list of spike times from all units, given in sample (integer "%i") units.
    The .clu file has one more row than the .res file: the first row gives the total number of
    unique unit ids in the file (and may exclude 0 & 1 from this count),
    with the remaining rows indicating which unit id the corresponding entry in the .res file refers to.
    The group id is loaded as unit property "group".
    
    In the original Neuroscope format:
        Unit ID 0 is the cluster of unsorted spikes (noise).
        Unit ID 1 is a cluster of multi-unit spikes.
    
    The function defaults to returning multi-unit activity as the first index, and ignoring unsorted noise.
    To return only the fully sorted units, set keep_mua_units=False.
    
    The sorting extractor always returns unit IDs from 1, ..., number of chosen clusters.
    
    Parameters
    ----------
    folder_path : str
        Optional. Path to the collection of .res and .clu text files. Will auto-detect format.
    resfile_path : PathType
        Optional. Path to a particular .res text file. If given, only the single .res file
        (and the respective .clu file) are loaded
    clufile_path : PathType
        Optional. Path to a particular .clu text file. If given, only the single .clu file
        (and the respective .res file) are loaded
    keep_mua_units : bool, default: True
        Optional. Whether or not to return sorted spikes from multi-unit activity
    exclude_shanks : list
        Optional. List of indices to ignore. The set of all possible indices is chosen by default, extracted as the
        final integer of all the .res.%i and .clu.%i pairs.
    xml_file_path : PathType, default: None
        Path to the .xml file referenced by this sorting.
  __init__(self, folder_path: 'OptionalPathType' = None, resfile_path: 'OptionalPathType' = None, clufile_path: 'OptionalPathType' = None, keep_mua_units: 'bool' = True, exclude_shanks: 'Optional[list]' = None, xml_file_path: 'OptionalPathType' = None)
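
The `.res`/`.clu` layout described above can be sketched with a toy parser; this is a simplified illustration of the stated format, not the extractor's implementation (it skips shank handling and MUA/noise filtering):

```python
# Hedged sketch of the .res/.clu pairing described above: .res holds one
# spike time (in samples) per line; .clu holds the cluster count on its
# first line, then one cluster id per spike.
def parse_res_clu(res_text, clu_text):
    spike_samples = [int(line) for line in res_text.split()]
    clu_values = [int(line) for line in clu_text.split()]
    n_clusters, cluster_ids = clu_values[0], clu_values[1:]
    if len(cluster_ids) != len(spike_samples):
        raise ValueError("mismatched .res and .clu files")
    spikes_by_unit = {}
    for t, unit_id in zip(spike_samples, cluster_ids):
        spikes_by_unit.setdefault(unit_id, []).append(t)
    return n_clusters, spikes_by_unit
```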

Class: read_nix
  Docstring:
    Class for reading Nix file
    
    Based on :py:class:`neo.rawio.NIXRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    block_index : int, default: None
        If there are several blocks, specify the block index you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, block_index=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Function: read_nwb(file_path, load_recording=True, load_sorting=False, electrical_series_path=None)
  Docstring:
    Reads NWB file into SpikeInterface extractors.
    
    Parameters
    ----------
    file_path : str or Path
        Path to NWB file.
    load_recording : bool, default: True
        If True, the recording object is loaded.
    load_sorting : bool, default: False
        If True, the sorting object is loaded.
    electrical_series_path : str or None, default: None
        The name of the ElectricalSeries (if multiple ElectricalSeries are present)
    
    Returns
    -------
    extractors : extractor or tuple
        Single RecordingExtractor/SortingExtractor or tuple with both
        (depending on "load_recording"/"load_sorting") arguments.

Class: read_nwb_recording
  Docstring:
    Load an NWBFile as a RecordingExtractor.
    
    Parameters
    ----------
    file_path : str, Path, or None
        Path to the NWB file or an s3 URL. Use this parameter to specify the file location
        if not using the `file` parameter.
    electrical_series_name : str or None, default: None
        Deprecated, use `electrical_series_path` instead.
    electrical_series_path : str or None, default: None
        The name of the ElectricalSeries object within the NWB file. This parameter is crucial
        when the NWB file contains multiple ElectricalSeries objects. It helps in identifying
        which specific series to extract data from. If there is only one ElectricalSeries and
        this parameter is not set, that unique series will be used by default.
        If multiple ElectricalSeries are present and this parameter is not set, an error is raised.
        The `electrical_series_path` corresponds to the path within the NWB file, e.g.,
        'acquisition/MyElectricalSeries'.
    load_time_vector : bool, default: False
        If set to True, the time vector is also loaded into the recording object. Useful for
        cases where precise timing information is required.
    samples_for_rate_estimation : int, default: 1000
        The number of timestamp samples used for estimating the sampling rate. This is relevant
        when the 'rate' attribute is not available in the ElectricalSeries.
    stream_mode : "fsspec" | "remfile" | "zarr" | None, default: None
        Determines the streaming mode for reading the file. Use this for optimized reading from
        different sources, such as local disk or remote servers.
    load_channel_properties : bool, default: True
        If True, all the channel properties are loaded from the NWB file and stored as properties.
        For streaming purposes, it can be useful to set this to False to speed up streaming.
    file : file-like object or None, default: None
        A file-like object representing the NWB file. Use this parameter if you have an in-memory
        representation of the NWB file instead of a file path.
    cache : bool, default: False
        Indicates whether to cache the file locally when using streaming. Caching can improve performance for
        remote files.
    stream_cache_path : str, Path, or None, default: None
        Specifies the local path for caching the file. Relevant only if `cache` is True.
    storage_options : dict | None = None,
        These are the additional kwargs (e.g. AWS credentials) that are passed to the zarr.open convenience function.
        This is only used on the "zarr" stream_mode.
    use_pynwb : bool, default: False
        Uses the pynwb library to read the NWB file. Setting this to False, the default, uses h5py
        to read the file. Using h5py can improve performance by bypassing some of the PyNWB validations.
    
    Returns
    -------
    recording : NwbRecordingExtractor
        The recording extractor for the NWB file.
    
    Examples
    --------
    Run on local file:
    
    >>> from spikeinterface.extractors.nwbextractors import NwbRecordingExtractor
    >>> rec = NwbRecordingExtractor(filepath)
    
    Run on s3 URL from the DANDI Archive:
    
    >>> from spikeinterface.extractors.nwbextractors import NwbRecordingExtractor
    >>> from dandi.dandiapi import DandiAPIClient
    >>>
    >>> # get s3 path
    >>> dandiset_id = "001054"
    >>> filepath = "sub-Dory/sub-Dory_ses-2020-09-14-004_ecephys.nwb"
    >>> with DandiAPIClient() as client:
    >>>     asset = client.get_dandiset(dandiset_id).get_asset_by_path(filepath)
    >>>     s3_url = asset.get_content_url(follow_redirects=1, strip_query=True)
    >>>
    >>> rec = NwbRecordingExtractor(s3_url, stream_mode="remfile")
  __init__(self, file_path: 'str | Path | None' = None, electrical_series_name: 'str | None' = None, load_time_vector: 'bool' = False, samples_for_rate_estimation: 'int' = 1000, stream_mode: "Optional[Literal['fsspec', 'remfile', 'zarr']]" = None, stream_cache_path: 'str | Path | None' = None, electrical_series_path: 'str | None' = None, load_channel_properties: 'bool' = True, *, file: 'BinaryIO | None' = None, cache: 'bool' = False, storage_options: 'dict | None' = None, use_pynwb: 'bool' = False)
  Method: fetch_available_electrical_series_paths(file_path: 'str | Path', stream_mode: "Optional[Literal['fsspec', 'remfile', 'zarr']]" = None, storage_options: 'dict | None' = None) -> 'list[str]'
    Docstring:
      Retrieves the paths to all ElectricalSeries objects within a neurodata file.
      
      Parameters
      ----------
      file_path : str | Path
          The path to the neurodata file to be analyzed.
      stream_mode : "fsspec" | "remfile" | "zarr" | None, optional
          Determines the streaming mode for reading the file. Use this for optimized reading from
          different sources, such as local disk or remote servers.
      storage_options : dict | None = None,
          These are the additional kwargs (e.g. AWS credentials) that are passed to the zarr.open convenience function.
          This is only used on the "zarr" stream_mode.
      Returns
      -------
      list of str
          A list of paths to all ElectricalSeries objects found in the file.
      
      
      Notes
      -----
      The paths are returned as strings, and can be used to load the desired ElectricalSeries object.
      Examples of paths are:
          - "acquisition/ElectricalSeries1"
          - "acquisition/ElectricalSeries2"
          - "processing/ecephys/LFP/ElectricalSeries1"
          - "processing/my_custom_module/MyContainer/ElectricalSeries2"
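When the 'rate' attribute is absent, a plausible use of `samples_for_rate_estimation` is to estimate the rate from the first N timestamps. This is only a sketch under that assumption; the actual internal estimator may differ. The `estimate_rate_from_timestamps` helper and the ideal 30 kHz clock are made up for illustration.

```python
import statistics

def estimate_rate_from_timestamps(timestamps, samples_for_rate_estimation=1000):
    # Hypothetical estimator: median inter-sample interval over the first N
    # timestamps, which is robust to occasional timestamp jitter.
    ts = timestamps[:samples_for_rate_estimation]
    dts = [b - a for a, b in zip(ts, ts[1:])]
    return 1.0 / statistics.median(dts)

ts = [i / 30000.0 for i in range(2000)]  # ideal 30 kHz clock
print(round(estimate_rate_from_timestamps(ts)))  # 30000
```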

Class: read_nwb_sorting
  Docstring:
    Load an NWBFile as a SortingExtractor.
    Parameters
    ----------
    file_path : str or Path
        Path to NWB file.
    electrical_series_path : str or None, default: None
        The name of the ElectricalSeries (if multiple ElectricalSeries are present).
    sampling_frequency : float or None, default: None
        The sampling frequency in Hz (required if no ElectricalSeries is available).
    unit_table_path : str or None, default: "units"
        The path of the unit table in the NWB file.
    samples_for_rate_estimation : int, default: 1000
        The number of timestamp samples to use to estimate the rate.
        Used if "rate" is not specified in the ElectricalSeries.
    stream_mode : "fsspec" | "remfile" | "zarr" | None, default: None
        The streaming mode to use. If None it assumes the file is on the local disk.
    stream_cache_path : str or Path or None, default: None
        Local path for caching. If None it uses the system temporary directory.
    load_unit_properties : bool, default: True
        If True, all the unit properties are loaded from the NWB file and stored as properties.
    t_start : float or None, default: None
        This is the time at which the corresponding ElectricalSeries starts. NWB stores its spikes as times,
        and `t_start` is used to convert those times to frames. Concretely, the returned frames are computed as:
    
        `frames = (times - t_start) * sampling_frequency`.
    
        SpikeInterface always considers the first frame to be at the beginning of the recording, independently
        of `t_start`.
    
        When `t_start` is not provided, it is inferred from the ElectricalSeries whose name matches
        `electrical_series_path`: it is either the `ElectricalSeries.starting_time` or the
        first timestamp in the `ElectricalSeries.timestamps`.
    cache : bool, default: False
        If True, the file is cached in the file passed to stream_cache_path
        if False, the file is not cached.
    storage_options : dict | None = None,
        These are the additional kwargs (e.g. AWS credentials) that are passed to the zarr.open convenience function.
        This is only used on the "zarr" stream_mode.
    use_pynwb : bool, default: False
        Uses the pynwb library to read the NWB file. Setting this to False, the default, uses h5py
        to read the file. Using h5py can improve performance by bypassing some of the PyNWB validations.
    
    Returns
    -------
    sorting : NwbSortingExtractor
        The sorting extractor for the NWB file.
  __init__(self, file_path: 'str | Path', electrical_series_path: 'str | None' = None, sampling_frequency: 'float | None' = None, samples_for_rate_estimation: 'int' = 1000, stream_mode: 'str | None' = None, stream_cache_path: 'str | Path | None' = None, load_unit_properties: 'bool' = True, unit_table_path: 'str' = 'units', *, t_start: 'float | None' = None, cache: 'bool' = False, storage_options: 'dict | None' = None, use_pynwb: 'bool' = False)
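The `frames = (times - t_start) * sampling_frequency` conversion described for `t_start` can be checked with plain Python; the spike times and rate below are made up for illustration.

```python
# Spike times in seconds (as stored in NWB) and the start time of the
# corresponding ElectricalSeries.
times = [10.0, 10.5, 11.25]
t_start = 10.0
sampling_frequency = 30000.0

# As documented: frames = (times - t_start) * sampling_frequency, so the
# first frame of the recording is frame 0 regardless of t_start.
frames = [int(round((t - t_start) * sampling_frequency)) for t in times]
print(frames)  # [0, 15000, 37500]
```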

Class: read_nwb_timeseries
  Docstring:
    Load a TimeSeries from an NWBFile as a RecordingExtractor.
    
    Parameters
    ----------
    file_path : str | Path | None
        Path to NWB file or an s3 URL. Use this parameter to specify the file location
        if not using the `file` parameter.
    timeseries_path : str | None
        The path to the TimeSeries object within the NWB file. This parameter is required
        when the NWB file contains multiple TimeSeries objects. The path corresponds to
        the location within the NWB file hierarchy, e.g. 'acquisition/MyTimeSeries'.
    load_time_vector : bool, default: False
        If True, the time vector is loaded into the recording object. Useful when
        precise timing information is needed.
    samples_for_rate_estimation : int, default: 1000
        The number of timestamps used for estimating the sampling rate when
        timestamps are used instead of a fixed rate.
    stream_mode : Literal["fsspec", "remfile", "zarr"] | None, default: None
        Determines the streaming mode for reading the file.
    file : BinaryIO | None, default: None
        A file-like object representing the NWB file. Use this parameter if you have
        an in-memory representation of the NWB file instead of a file path given by `file_path`.
    cache : bool, default: False
        If True, the file is cached locally when using streaming.
    stream_cache_path : str | Path | None, default: None
        Local path for caching. Only used if `cache` is True.
    storage_options : dict | None, default: None
        Additional kwargs (e.g. AWS credentials) passed to zarr.open. Only used with
        "zarr" stream_mode.
    use_pynwb : bool, default: False
        If True, uses pynwb library to read the NWB file. Default False uses h5py/zarr
        directly for better performance.
    
    Returns
    -------
    recording : NwbTimeSeriesExtractor
        A recording extractor containing the TimeSeries data.
  __init__(self, file_path: 'str | Path | None' = None, timeseries_path: 'str | None' = None, load_time_vector: 'bool' = False, samples_for_rate_estimation: 'int' = 1000, stream_mode: "Optional[Literal['fsspec', 'remfile', 'zarr']]" = None, stream_cache_path: 'str | Path | None' = None, *, file: 'BinaryIO | None' = None, cache: 'bool' = False, storage_options: 'dict | None' = None, use_pynwb: 'bool' = False)
  Method: fetch_available_timeseries_paths(file_path: 'str | Path', stream_mode: "Optional[Literal['fsspec', 'remfile', 'zarr']]" = None, storage_options: 'dict | None' = None) -> 'list[str]'
    Docstring:
      Get paths to all TimeSeries objects in a neurodata file.
      
      Parameters
      ----------
      file_path : str | Path
          Path to the NWB file.
      stream_mode : str | None
          Streaming mode for reading remote files.
      storage_options : dict | None
          Additional options for zarr storage.
      
      Returns
      -------
      list[str]
          List of paths to TimeSeries objects.
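The path collection performed by `fetch_available_timeseries_paths` can be pictured as a walk over the file hierarchy. Below, nested dicts play the role of HDF5 groups and a "neurodata_type" key marks TimeSeries objects; this toy stand-in only illustrates how the paths are assembled, not how the real method inspects the file.

```python
def find_timeseries_paths(group, prefix=""):
    # Walk a nested-dict stand-in for an NWB file hierarchy and collect
    # slash-separated paths of nodes marked as TimeSeries.
    paths = []
    for name, child in group.items():
        if not isinstance(child, dict):
            continue
        path = f"{prefix}{name}"
        if child.get("neurodata_type") == "TimeSeries":
            paths.append(path)
        else:
            paths.extend(find_timeseries_paths(child, path + "/"))
    return paths

nwb_like = {
    "acquisition": {"MyTimeSeries": {"neurodata_type": "TimeSeries"}},
    "processing": {"behavior": {"Position": {"neurodata_type": "TimeSeries"}}},
}
print(find_timeseries_paths(nwb_like))
```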

Function: read_openephys(folder_path, **kwargs)
  Docstring:
    Read "legacy" or "binary" Open Ephys formats
    
    Parameters
    ----------
    folder_path : str or Path
        Path to openephys folder
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load
    block_index : int, default: None
        If there are several blocks (experiments), specify the block index you want to load
    all_annotations : bool, default: False
        Load exhaustively all annotation from neo
    load_sync_channel : bool, default: False
        If False (default) and a SYNC channel is present (e.g. Neuropixels), this is not loaded.
        If True, the SYNC channel is loaded and can be accessed in the analog signals.
        For Open Ephys binary format only
    load_sync_timestamps : bool, default: False
        If True, the synchronized_timestamps are loaded and set as times to the recording.
        If False (default), only the t_start and sampling rate are set, and timestamps are assumed
        to be uniform and linearly increasing.
        For Open Ephys binary format only
    experiment_names : str, list, or None, default: None
        If multiple experiments are available, this argument allows users to select one
        or more experiments. If None, all experiments are loaded as blocks.
        E.g. `experiment_names="experiment2"`, `experiment_names=["experiment1", "experiment2"]`
        For Open Ephys binary format only
    ignore_timestamps_errors : bool, default: False
        Ignore the discontinuous timestamps errors in neo
        For Open Ephys legacy format only
    
    
    Returns
    -------
    recording : OpenEphysLegacyRecordingExtractor or OpenEphysBinaryExtractor
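The `experiment_names` argument above accepts a string, a list, or None. A common way to handle such an argument is to normalize it to a list before selecting blocks; this is an illustrative sketch, not the actual reader code.

```python
def normalize_experiment_names(experiment_names):
    # None means: load all experiments as blocks.
    if experiment_names is None:
        return None
    # A single name is treated the same as a one-element list.
    if isinstance(experiment_names, str):
        return [experiment_names]
    return list(experiment_names)

print(normalize_experiment_names("experiment2"))
print(normalize_experiment_names(["experiment1", "experiment2"]))
```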

Function: read_openephys_event(folder_path, block_index=None)
  Docstring:
    Read Open Ephys events from "binary" format.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to openephys folder
    block_index : int, default: None
        If there are several blocks (experiments), specify the block index you want to load.
    
    Returns
    -------
    event : OpenEphysBinaryEventExtractor

Class: read_phy
  Docstring:
    Load Phy format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the output Phy folder (containing the params.py).
    exclude_cluster_groups : list or str, default: None
        Cluster groups to exclude (e.g. "noise" or ["noise", "mua"]).
    load_all_cluster_properties : bool, default: True
        If True, all cluster properties are loaded from the tsv/csv files.
    
    Returns
    -------
    extractor : PhySortingExtractor
        The loaded Sorting object.
  __init__(self, folder_path: 'Path | str', exclude_cluster_groups: 'Optional[list[str] | str]' = None, load_all_cluster_properties: 'bool' = True)

Class: read_plexon
  Docstring:
    Class for reading plexon plx files.
    
    Based on :py:class:`neo.rawio.PlexonRawIO`
    
    Parameters
    ----------
    file_path : str | Path
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: True
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
        Example for wideband signals:
            names: ["WB01", "WB02", "WB03", "WB04"]
            ids: ["0" , "1", "2", "3"]
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_plexon
    >>> recording = read_plexon(file_path=r'my_data.plx')
  __init__(self, file_path: 'str | Path', stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = True)

Class: read_plexon2
  Docstring:
    Class for reading plexon pl2 files.
    
    Based on :py:class:`neo.rawio.Plexon2RawIO`
    
    Parameters
    ----------
    file_path : str | Path
        The file path of the plexon2 file. It should have the .pl2 extension.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    use_names_as_ids : bool, default: True
        If True, the names of the signals are used as channel ids. If False, the channel ids are a combination of the
        source id and the channel index.
    
        Example for wideband signals:
            names: ["WB01", "WB02", "WB03", "WB04"]
            ids: ["source3.1" , "source3.2", "source3.3", "source3.4"]
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    reading_attempts : int, default: 25
        Number of attempts to read the file before raising an error
        This opening process is somewhat unreliable and might fail occasionally. Adjust this higher
        if you encounter problems in opening the file.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_plexon2
    >>> recording = read_plexon2(file_path=r'my_data.pl2')
  __init__(self, file_path, stream_id=None, stream_name=None, use_names_as_ids=True, all_annotations=False, reading_attempts: 'int' = 25)

Class: read_plexon2_event
  Docstring:
    Class for reading plexon event data from .pl2 files.
    
    Based on :py:class:`neo.rawio.Plexon2RawIO`
    
    Parameters
    ----------
    folder_path : str
  __init__(self, folder_path, block_index=None)

Class: read_plexon2_sorting
  Docstring:
    Class for reading plexon spiking data from .pl2 files.
    
    Based on :py:class:`neo.rawio.Plexon2RawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    sampling_frequency : float, default: None
        The sampling frequency of the sorting (required for multiple streams with different sampling frequencies).
  __init__(self, file_path, sampling_frequency=None)

Class: read_plexon_sorting
  Docstring:
    Class for reading plexon spiking data (.plx files).
    
    Based on :py:class:`neo.rawio.PlexonRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
  __init__(self, file_path)

Class: read_shybrid_recording
  Docstring:
    Load SHYBRID format data as a recording extractor.
    
    Parameters
    ----------
    file_path : str or Path
        Path to the SHYBRID file.
    
    Returns
    -------
    extractor : SHYBRIDRecordingExtractor
        Loaded data.
  __init__(self, file_path)
  Method: write_recording(recording, save_path, initial_sorting_fn, dtype='float32', **job_kwargs)
    Docstring:
      Convert and save the recording extractor to SHYBRID format.
      
      Parameters
      ----------
      recording : RecordingExtractor
          The recording extractor to be converted and saved
      save_path : str
          Full path to desired target folder
      initial_sorting_fn : str
          Full path to the initial sorting csv file (can also be generated
          using the write_sorting static method from the SHYBRIDSortingExtractor)
      dtype : dtype, default: float32
          Type of the saved data
      **job_kwargs : keyword arguments for the write_to_binary_dat_format() function

Class: read_shybrid_sorting
  Docstring:
    Load SHYBRID format data as a sorting extractor.
    
    Parameters
    ----------
    file_path : str or Path
        Path to the SHYBRID file.
    sampling_frequency : int
        The sampling frequency.
    delimiter : str
        The delimiter to use for loading the file.
    
    Returns
    -------
    extractor : SHYBRIDSortingExtractor
        Loaded data.
  __init__(self, file_path, sampling_frequency, delimiter=',')
  Method: write_sorting(sorting, save_path)
    Docstring:
      Convert and save the sorting extractor to SHYBRID CSV format.
      
      Parameters
      ----------
      sorting : SortingExtractor
          The sorting extractor to be converted and saved.
      save_path : str
          Full path to the desired target folder.

Class: read_sinaps_research_platform
  Docstring:
    Recording extractor for the SiNAPS research platform system saved in binary format.
    
    Parameters
    ----------
    file_path : str | Path
        Path to the SiNAPS .bin file.
    stream_name : "filt" | "raw" | "aux", default: "filt"
        The stream name to extract.
        "filt" extracts the filtered data, "raw" extracts the raw data, and "aux" extracts the auxiliary data.
  __init__(self, file_path: 'str | Path', stream_name: 'str' = 'filt')

Class: read_sinaps_research_platform_h5
  Docstring:
    Recording extractor for the SiNAPS research platform system saved in HDF5 format.
    
    Parameters
    ----------
    file_path : str | Path
        Path to the SiNAPS .h5 file.
  __init__(self, file_path: 'str | Path')

Class: read_spike2
  Docstring:
    Class for reading spike2 smr files.
    smrx files are not supported by this extractor; use CedRecordingExtractor instead.
    
    Based on :py:class:`neo.rawio.Spike2RawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations=False, use_names_as_ids: 'bool' = False)

Class: read_spikegadgets
  Docstring:
    Class for reading rec files from spikegadgets.
    
    Based on :py:class:`neo.rawio.SpikeGadgetsRawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str or None, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str or None, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_spikegadgets
    >>> recording = read_spikegadgets(file_path=r'my_data.rec')
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: read_spikeglx
  Docstring:
    Class for reading data saved by SpikeGLX software.
    See https://billkarsh.github.io/SpikeGLX/
    
    Based on :py:class:`neo.rawio.SpikeGLXRawIO`
    
    Contrary to older versions, this reader is folder-based.
    If the folder contains several streams (e.g., "imec0.ap", "nidq" ,"imec0.lf"),
    then the stream has to be specified with "stream_id" or "stream_name".
    
    Parameters
    ----------
    folder_path : str
        The folder path to load the recordings from.
    load_sync_channel : bool, default: False
        Whether or not to load the last channel in the stream, which is typically used for synchronization.
        If True, then the probe is not loaded.
    stream_id : str or None, default: None
        If there are several streams, specify the stream id you want to load.
        For example, "imec0.ap", "nidq", or "imec0.lf".
    stream_name : str or None, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    
    Examples
    --------
    >>> from spikeinterface.extractors import read_spikeglx
    >>> recording = read_spikeglx(folder_path=r'path_to_folder_with_data', load_sync_channel=False)
    # we can load the sync channel, but then the probe is not loaded
    >>> recording = read_spikeglx(folder_path=r'path_to_folder_with_data', load_sync_channel=True)
  __init__(self, folder_path, load_sync_channel=False, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Function: read_spikeglx_event(folder_path, block_index=None)
  Docstring:
    Read SpikeGLX events
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the SpikeGLX folder
    block_index : int, default: None
        If there are several blocks (experiments), specify the block index you want to load.
    
    Returns
    -------
    event : SpikeGLXEventExtractor

Class: read_spykingcircus
  Docstring:
    Load SpykingCircus format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the SpykingCircus folder.
    
    Returns
    -------
    extractor : SpykingCircusSortingExtractor
        Loaded data.
  __init__(self, folder_path)

Class: read_tdt
  Docstring:
    Class for reading TDT folder.
    
    Based on :py:class:`neo.rawio.TdTRawIO`
    
    Parameters
    ----------
    folder_path : str
        The folder path to the tdt folder.
    stream_id : str or None, default: None
        If there are several streams, specify the stream id you want to load.
    stream_name : str or None, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
    block_index : int, default: None
        If there are several blocks (experiments), specify the block index you want to load
  __init__(self, folder_path, stream_id=None, stream_name=None, block_index=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)

Class: read_tridesclous
  Docstring:
    Load Tridesclous format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the Tridesclous folder.
    chan_grp : list or None, default: None
        The channel group(s) to load.
    
    Returns
    -------
    extractor : TridesclousSortingExtractor
        Loaded data.
  __init__(self, folder_path, chan_grp=None)

Class: read_waveclus
  Docstring:
    Load WaveClus format data as a sorting extractor.
    
    Parameters
    ----------
    file_path : str or Path
        Path to the WaveClus file.
    keep_good_only : bool, default: True
        Whether to only keep good units.
    
    Returns
    -------
    extractor : WaveClusSortingExtractor
        Loaded data.
  __init__(self, file_path, keep_good_only=True)

Class: read_waveclus_snippets
  Docstring:
    Abstract class representing several multichannel snippets.
  __init__(self, file_path)
  Method: write_snippets(snippets_extractor, save_file_path)
    Docstring:
      None

Class: read_yass
  Docstring:
    Load YASS format data as a sorting extractor.
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the YASS folder.
    
    Returns
    -------
    extractor : YassSortingExtractor
        Loaded data.
  __init__(self, folder_path)

Class: rec_class
  Docstring:
    Class for reading NEX (NeuroExplorer data format) files.
    
    Based on :py:class:`neo.rawio.NeuroExplorerRawIO`
    
    Importantly, at the moment, this reader only extracts one channel of the recording.
    This is because the NeuroExplorerRawIO class does not support multi-channel recordings
    as in the NeuroExplorer format they might have different sampling rates.
    
    Consider extracting all the channels and then concatenating them with the aggregate_channels function.
    
    >>> from spikeinterface.extractors.neoextractors.neuroexplorer import NeuroExplorerRecordingExtractor
    >>> from spikeinterface.core import aggregate_channels
    >>>
    >>> file_path="/the/path/to/your/nex/file.nex"
    >>>
    >>> streams = NeuroExplorerRecordingExtractor.get_streams(file_path=file_path)
    >>> stream_names = streams[0]
    >>>
    >>> your_signal_stream_names = "Here goes the logic to filter from stream names the ones that you know have the same sampling rate and you want to aggregate"
    >>>
    >>> recording_list = [NeuroExplorerRecordingExtractor(file_path=file_path, stream_name=stream_name) for stream_name in your_signal_stream_names]
    >>> recording = aggregate_channels(recording_list)
    
    
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    stream_id : str, default: None
        If there are several streams, specify the stream id you want to load.
        For this neo reader streams are defined by their sampling frequency.
    stream_name : str, default: None
        If there are several streams, specify the stream name you want to load.
    all_annotations : bool, default: False
        Load exhaustively all annotations from neo.
    use_names_as_ids : bool, default: False
        Determines the format of the channel IDs used by the extractor. If set to True, the channel IDs will be the
        names from NeoRawIO. If set to False, the channel IDs will be the ids provided by NeoRawIO.
  __init__(self, file_path, stream_id=None, stream_name=None, all_annotations: 'bool' = False, use_names_as_ids: 'bool' = False)
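The aggregation advice in the docstring above requires selecting streams that share one sampling rate before concatenating channels. The filtering step it leaves as a placeholder could look like the following sketch, where the stream names and rates are hypothetical.

```python
# Toy sketch of the precondition for aggregate_channels: keep only streams
# that share the target sampling rate (names and rates are made up).
streams = {"chan01": 40000.0, "chan02": 40000.0, "lfp01": 1000.0}
target_rate = 40000.0

same_rate = [name for name, rate in streams.items() if rate == target_rate]
print(same_rate)  # ['chan01', 'chan02']
```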

Class: sort_class
  Docstring:
    Class for reading plexon spiking data from .pl2 files.
    
    Based on :py:class:`neo.rawio.Plexon2RawIO`
    
    Parameters
    ----------
    file_path : str
        The file path to load the recordings from.
    sampling_frequency : float, default: None
        The sampling frequency of the sorting (required for multiple streams with different sampling frequencies).
  __init__(self, file_path, sampling_frequency=None)

Function: toy_example(duration=10, num_channels=4, num_units=10, sampling_frequency=30000.0, num_segments=2, average_peak_amplitude=-100, upsample_factor=None, contact_spacing_um=40.0, num_columns=1, spike_times=None, spike_labels=None, firing_rate=3.0, seed=None)
  Docstring:
    Returns a generated dataset with "toy" units and spikes on top of white noise.
    This is useful to test api, algos, postprocessing and visualization without any downloading.
    
    This is a rewrite (with a lazy approach) of the old spikeinterface.extractor.toy_example(), which itself was
    a rewrite of the very old spikeextractor.toy_example() (from Jeremy Magland).
    In this new version, the recording is fully lazy, so it does not use disk space or memory.
    It internally uses NoiseGeneratorRecording + generate_templates + InjectTemplatesRecording.
    
    For finer control over the parameters, use `generate_ground_truth_recording()` instead.
    
    Parameters
    ----------
    duration : float or list[float], default: 10
        Duration in seconds. If a list is provided, it will be the duration of each segment.
    num_channels : int, default: 4
        Number of channels
    num_units : int, default: 10
        Number of units
    sampling_frequency : float, default: 30000
        Sampling frequency
    num_segments : int, default: 2
        Number of segments.
    spike_times : np.array or list[np.array] or None, default: None
        Spike times in the recording
    spike_labels : np.array or list[np.array] or None, default: None
        Cluster label for each spike time (spike_times and spike_labels must be specified together).
    firing_rate : float, default: 3.0
        The firing rate for the units (in Hz)
    seed : int or None, default: None
        Seed for random initialization.
    upsample_factor : None or int, default: None
        An upsampling factor, used only when templates are not provided.
    num_columns : int, default: 1
        Number of columns in probe.
    average_peak_amplitude : float, default: -100
        Average peak amplitude of generated templates.
    contact_spacing_um : float, default: 40.0
        Spacing between probe contacts in micrometers.
    
    Returns
    -------
    recording : RecordingExtractor
        The output recording extractor.
    sorting : SortingExtractor
        The output sorting extractor.
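To make the description above concrete, here is a minimal numpy sketch of the idea behind `toy_example` (noise plus injected templates). This is an illustration only, not the SpikeInterface implementation; the template shape, noise level, and spike frames are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative sketch: a "toy" trace is white noise with a spike
# template added at known spike frames.
sampling_frequency = 30000.0
duration_s = 1.0
num_samples = int(duration_s * sampling_frequency)

noise = rng.normal(loc=0.0, scale=5.0, size=num_samples)

# A crude template with a -100 uV peak (cf. average_peak_amplitude)
template = -100.0 * np.hanning(60)

spike_frames = np.array([1000, 5000, 12000])
traces = noise.copy()
for frame in spike_frames:
    traces[frame : frame + template.size] += template

# The injected peaks dominate the noise floor
assert traces.min() < -50.0
```

The real function builds this lazily (NoiseGeneratorRecording + InjectTemplatesRecording), so no array of this kind is ever materialized in memory.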

==== DELIM ====
API for module: spikeinterface.exporters

Function: export_report(sorting_analyzer, output_folder, remove_if_exists=False, format='png', show_figures=False, peak_sign='neg', force_computation=False, **job_kwargs)
  Docstring:
    Exports a SI spike sorting report. The report includes summary figures of the spike sorting output.
    What is plotted depends on what has been calculated. Unit locations and unit waveforms are always included.
    Unit waveform densities, correlograms and spike amplitudes are plotted if `waveforms`, `correlograms`,
    and `spike_amplitudes` have been computed for the given `sorting_analyzer`.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object
    output_folder : str
        The output folder where the report files are saved
    remove_if_exists : bool, default: False
        If True and the output folder exists, it is removed
    format : str, default: "png"
        The output figure format (any format handled by matplotlib)
    peak_sign : "neg" or "pos", default: "neg"
        Used to compute amplitudes and metrics
    show_figures : bool, default: False
        If True, figures are shown. If False, figures are closed after saving
    force_computation : bool, default: False
        If True, force some heavy computations before exporting
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems
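As a rough sketch of how the chunking options above relate, the snippet below converts a `chunk_duration` or `chunk_memory` budget into a chunk size in samples. The helper names and the exact resolution rules are assumptions for illustration; SpikeInterface's internal logic may differ.

```python
# Hypothetical helpers illustrating the job_kwargs chunking options.
sampling_frequency = 30000.0
num_channels = 64
dtype_bytes = 2  # e.g. int16 traces

def chunk_size_from_duration(chunk_duration_s: float) -> int:
    # chunk_duration "1s" at 30 kHz -> 30000 samples per chunk
    return int(chunk_duration_s * sampling_frequency)

def chunk_size_from_memory(chunk_memory_bytes: int) -> int:
    # samples per chunk so one chunk fits the per-job memory budget
    bytes_per_sample = num_channels * dtype_bytes
    return chunk_memory_bytes // bytes_per_sample

assert chunk_size_from_duration(1.0) == 30000
# "100M" ~ 100e6 bytes -> 100e6 / (64 channels * 2 bytes) = 781250 samples
assert chunk_size_from_memory(100_000_000) == 781250
```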

Function: export_to_ibl_gui(sorting_analyzer: 'SortingAnalyzer', output_folder: 'str | Path', lfp_recording: 'BaseRecording | None' = None, rms_win_length_s=3, welch_win_length_samples=16384, psd_chunk_duration_s=1, psd_num_chunks=100, good_units_query: 'str | None' = 'amplitude_median < -40 and isi_violations_ratio < 0.5 and amplitude_cutoff < 0.2', remove_if_exists: 'bool' = False, verbose: 'bool' = True, **job_kwargs)
  Docstring:
    Exports a sorting analyzer to the format required by the `IBL alignment GUI <https://github.com/int-brain-lab/iblapps/wiki>`_.
    
    Parameters
    ----------
    sorting_analyzer: SortingAnalyzer
        The sorting analyzer object to use for spike information.
        Should also contain the pre-processed recording to use for AP-band data.
    output_folder: str | Path
        The output folder for the exports.
    lfp_recording: BaseRecording | None, default: None
        The pre-processed recording to use for LFP data. If None, the LFP data is not exported.
    rms_win_length_s: float, default: 3
        The window length in seconds for the RMS calculation (on the LFP data).
    welch_win_length_samples: int, default: 2^14
        The window length in samples for the Welch spectral density computation (on the LFP data).
    psd_chunk_duration_s: float, default: 1
        The chunk duration in seconds for the spectral density calculation (on the LFP data).
    psd_num_chunks: int, default: 100
        The number of chunks to use for the spectral density calculation (on the LFP data).
    remove_if_exists: bool, default: False
        If True and "output_folder" exists, it is removed and overwritten
    verbose: bool, default: True
        If True, output is verbose

Function: export_to_phy(sorting_analyzer: 'SortingAnalyzer', output_folder: 'str | Path', compute_pc_features: 'bool' = True, compute_amplitudes: 'bool' = True, sparsity: 'Optional[ChannelSparsity]' = None, copy_binary: 'bool' = True, remove_if_exists: 'bool' = False, template_mode: 'str' = 'average', add_quality_metrics: 'bool' = True, add_template_metrics: 'bool' = True, additional_properties: 'list | None' = None, dtype: 'Optional[npt.DTypeLike]' = None, verbose: 'bool' = True, use_relative_path: 'bool' = False, **job_kwargs)
  Docstring:
    Exports a sorting analyzer to the phy template-gui format.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object
    output_folder : str | Path
        The output folder where the phy template-gui files are saved
    compute_pc_features : bool, default: True
        If True, pc features are computed
    compute_amplitudes : bool, default: True
        If True, waveforms amplitudes are computed
    sparsity : ChannelSparsity or None, default: None
        The sparsity object
    copy_binary : bool, default: True
        If True, the recording is copied and saved in the phy "output_folder"
    remove_if_exists : bool, default: False
        If True and "output_folder" exists, it is removed and overwritten
    template_mode : str, default: "average"
        Parameter "mode" to be given to SortingAnalyzer.get_template()
    add_quality_metrics : bool, default: True
        If True, quality metrics (if computed) are saved as Phy tsv and will appear in the ClusterView.
    add_template_metrics : bool, default: True
        If True, template metrics (if computed) are saved as Phy tsv and will appear in the ClusterView.
    additional_properties : list | None, default: None
        List of additional properties to be saved as Phy tsv and will appear in the ClusterView.
    dtype : dtype or None, default: None
        Dtype to save binary data
    verbose : bool, default: True
        If True, output is verbose
    use_relative_path : bool, default: False
        If True and `copy_binary=True` saves the binary file `dat_path` in the `params.py` relative to `output_folder` (ie `dat_path=r"recording.dat"`). If `copy_binary=False`, then uses a path relative to the `output_folder`
        If False, uses an absolute path in the `params.py` (ie `dat_path=r"path/to/the/recording.dat"`)
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems
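The `use_relative_path` behavior described above can be sketched with `pathlib`. The exact strings written by `export_to_phy` into `params.py` may differ; this only illustrates the relative-vs-absolute distinction.

```python
from pathlib import Path

# Hypothetical paths for illustration.
output_folder = Path("/data/phy_export")
binary_file = output_folder / "recording.dat"

def dat_path_entry(use_relative_path: bool) -> str:
    if use_relative_path:
        # path relative to output_folder, e.g. r"recording.dat"
        rel = binary_file.relative_to(output_folder)
        return f'dat_path = r"{rel}"'
    # absolute path, e.g. r"/data/phy_export/recording.dat"
    return f'dat_path = r"{binary_file}"'

assert dat_path_entry(True) == 'dat_path = r"recording.dat"'
assert dat_path_entry(False) == 'dat_path = r"/data/phy_export/recording.dat"'
```

A relative `dat_path` keeps the Phy folder portable: the export can be moved or shared without editing `params.py`.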

==== DELIM ====
API for module: spikeinterface

Class: AnalyzerExtension
  Docstring:
    This is the base class for extending the SortingAnalyzer.
    It can handle persistence to disk for any related computation, for instance:
      * waveforms
      * principal components
      * spike amplitudes
      * quality metrics
    
    Possible extension can be registered on-the-fly at import time with register_result_extension() mechanism.
    It also enables any custom computation on top of the SortingAnalyzer to be implemented by the user.
    
    An extension needs to inherit from this class and implement some attributes and abstract methods:
    
      * extension_name
      * depend_on
      * need_recording
      * use_nodepipeline
      * nodepipeline_variables only if use_nodepipeline=True
      * need_job_kwargs
      * _set_params()
      * _run()
      * _select_extension_data()
      * _merge_extension_data()
      * _get_data()
    
    The subclass must also set the `extension_name` class attribute to a non-None value.
    
    The subclass must also handle an attribute `data`, a dict containing the results after `run()`.
    
    Every AnalyzerExtension has an associated convenience function (created with the function_factory):
    compute_unit_location(sorting_analyzer, ...) is equivalent to sorting_analyzer.compute("unit_location", ...)
  __init__(self, sorting_analyzer)
  Method: copy(self, new_sorting_analyzer, unit_ids=None)
    Docstring:
      None
  Method: delete(self)
    Docstring:
      Delete the extension from the folder or zarr and from the dict.
  Method: get_data(self, *args, **kwargs)
    Docstring:
      None
  Method: get_pipeline_nodes(self)
    Docstring:
      None
  Method: load_data(self)
    Docstring:
      None
  Method: load_params(self)
    Docstring:
      None
  Method: load_run_info(self)
    Docstring:
      None
  Method: merge(self, new_sorting_analyzer, merge_unit_groups, new_unit_ids, keep_mask=None, verbose=False, **job_kwargs)
    Docstring:
      None
  Method: reset(self)
    Docstring:
      Reset the extension.
      Delete the sub folder and create a new empty one.
  Method: run(self, save=True, **kwargs)
    Docstring:
      None
  Method: save(self)
    Docstring:
      None
  Method: set_params(self, save=True, **params)
    Docstring:
      Set parameters for the extension and
      make it persistent in json.

Class: AppendSegmentRecording
  Docstring:
    Takes as input a list of parent recordings each with multiple segments and
    returns a single multi-segment recording that "appends" all segments from
    all parent recordings.
    
    For instance, given one recording with 2 segments and one recording with 3 segments,
    this class will give one recording with 5 segments
    
    Parameters
    ----------
    recording_list : list of BaseRecording
        A list of recordings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across recordings
  __init__(self, recording_list, sampling_frequency_max_diff=0)
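The segment bookkeeping from the docstring's example (2 segments + 3 segments = 5 segments) amounts to simple concatenation; the durations below are invented for illustration.

```python
# Sketch of appending segments from two recordings.
segments_a = [10.0, 12.0]        # durations (s) of recording A's segments
segments_b = [5.0, 5.0, 8.0]     # durations (s) of recording B's segments

appended_segments = segments_a + segments_b
assert len(appended_segments) == 5
assert sum(appended_segments) == 40.0
```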

Class: AppendSegmentSorting
  Docstring:
    Returns a sorting that "appends" all segments from all sortings
    into one multi-segment sorting.
    
    Parameters
    ----------
    sorting_list : list of BaseSorting
        A list of sortings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across sortings
  __init__(self, sorting_list, sampling_frequency_max_diff=0)

Class: BaseEvent
  Docstring:
    Abstract class representing events.
    
    
    Parameters
    ----------
    channel_ids : list or np.array
        The channel ids
    structured_dtype : dtype or dict
        The dtype of the events. If dict, each key is the channel_id and values must be
        the dtype of the channel (also structured). If dtype, each channel is assigned the
        same dtype.
        In case of structured dtypes, the "time" or "timestamp" field name must be present.
  __init__(self, channel_ids, structured_dtype)
  Method: add_event_segment(self, event_segment)
    Docstring:
      None
  Method: get_dtype(self, channel_id)
    Docstring:
      None
  Method: get_event_times(self, channel_id: 'int | str | None' = None, segment_index: 'int | None' = None, start_time: 'float | None' = None, end_time: 'float | None' = None)
    Docstring:
      Return events timestamps of a channel in seconds.
      
      Parameters
      ----------
      channel_id : int | str | None, default: None
          The event channel id
      segment_index : int | None, default: None
          The segment index, required for multi-segment objects
      start_time : float | None, default: None
          The start time in seconds
      end_time : float | None, default: None
          The end time in seconds
      
      Returns
      -------
      np.array
          1d array of timestamps for the event channel
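The `start_time`/`end_time` windowing described above can be sketched as a boolean mask over the timestamps. This is an illustration, not SpikeInterface's implementation; the helper name and the example timestamps are invented.

```python
import numpy as np

timestamps = np.array([0.5, 1.2, 2.7, 4.0, 9.9])  # event times in seconds

def filter_event_times(times, start_time=None, end_time=None):
    # None means "no bound on that side", as in the docstring defaults
    mask = np.ones(times.size, dtype=bool)
    if start_time is not None:
        mask &= times >= start_time
    if end_time is not None:
        mask &= times <= end_time
    return times[mask]

assert filter_event_times(timestamps, start_time=1.0, end_time=5.0).tolist() == [1.2, 2.7, 4.0]
```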
  Method: get_events(self, channel_id: 'int | str | None' = None, segment_index: 'int | None' = None, start_time: 'float | None' = None, end_time: 'float | None' = None)
    Docstring:
      Return events of a channel in its native structured type.
      
      Parameters
      ----------
      channel_id : int | str | None, default: None
          The event channel id
      segment_index : int | None, default: None
          The segment index, required for multi-segment objects
      start_time : float | None, default: None
          The start time in seconds
      end_time : float | None, default: None
          The end time in seconds
      
      Returns
      -------
      np.array
          Structured np.array of dtype `get_dtype(channel_id)`
  Method: get_num_channels(self)
    Docstring:
      None
  Method: get_num_segments(self)
    Docstring:
      None

Class: BaseEventSegment
  Docstring:
    Abstract class representing event channels inside a segment.
  __init__(self)
  Method: get_event_times(self, channel_id: 'int | str', start_time: 'float', end_time: 'float') -> 'np.ndarray'
    Docstring:
      Returns event timestamps of a channel in seconds
      Parameters
      ----------
      channel_id : int | str
          The event channel id
      start_time : float
          The start time in seconds
      end_time : float
          The end time in seconds
      
      Returns
      -------
      np.array
          1d array of timestamps for the event channel
  Method: get_events(self, channel_id, start_time, end_time)
    Docstring:
      None

Class: BaseRecording
  Docstring:
    Abstract class representing a multichannel timeseries (or block of raw ephys traces).
    Internally handles a list of RecordingSegment
  __init__(self, sampling_frequency: 'float', channel_ids: 'list', dtype)
  Method: add_recording_segment(self, recording_segment)
    Docstring:
      Adds a recording segment.
      
      Parameters
      ----------
      recording_segment : BaseRecordingSegment
          The recording segment to add
  Method: astype(self, dtype, round: 'bool | None' = None)
    Docstring:
      None
  Method: binary_compatible_with(self, dtype=None, time_axis=None, file_paths_length=None, file_offset=None, file_suffix=None, file_paths_lenght=None)
    Docstring:
      Check if the recording is binary-compatible, with constraints on
      
        * dtype
        * time_axis
        * len(file_paths)
        * file_offset
        * file_suffix
  Method: frame_slice(self, start_frame: 'int | None', end_frame: 'int | None') -> 'BaseRecording'
    Docstring:
      Returns a new recording with sliced frames. Note that this operation is not in place.
      
      Parameters
      ----------
      start_frame : int, optional
          The start frame, if not provided it is set to 0
      end_frame : int, optional
          The end frame, if not provided it is set to the total number of samples
      
      Returns
      -------
      BaseRecording
          A new recording object with only samples between start_frame and end_frame
  Method: get_binary_description(self)
    Docstring:
      When `rec.is_binary_compatible()` is True
      this returns a dictionary describing the binary format.
  Method: get_channel_locations(self, channel_ids: 'list | np.ndarray | tuple | None' = None, axes: "'xy' | 'yz' | 'xz' | 'xyz'" = 'xy') -> 'np.ndarray'
    Docstring:
      Get the physical locations of specified channels.
      
      Parameters
      ----------
      channel_ids : array-like, optional
          The IDs of the channels for which to retrieve locations. If None, retrieves locations
          for all available channels. Default is None.
      axes : "xy" | "yz" | "xz" | "xyz", default: "xy"
          The spatial axes to return, specified as a string (e.g., "xy", "xyz"). Default is "xy".
      
      Returns
      -------
      np.ndarray
          A 2D or 3D array of shape (n_channels, n_dimensions) containing the locations of the channels.
          The number of dimensions depends on the `axes` argument (e.g., 2 for "xy", 3 for "xyz").
  Method: get_duration(self, segment_index=None) -> 'float'
    Docstring:
      Returns the duration in seconds.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index to retrieve the duration for.
          For multi-segment objects, it is required, default: None
          With single segment recording returns the duration of the single segment
      
      Returns
      -------
      float
          The duration in seconds
  Method: get_end_time(self, segment_index=None) -> 'float'
    Docstring:
      Get the stop time of the recording segment.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index (required for multi-segment)
      
      Returns
      -------
      float
          The stop time in seconds
  Method: get_memory_size(self, segment_index=None) -> 'int'
    Docstring:
      Returns the memory size of segment_index in bytes.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The index of the segment for which the memory size should be calculated.
          For multi-segment objects, it is required, default: None
          With single segment recording returns the memory size of the single segment
      
      Returns
      -------
      int
          The memory size of the specified segment in bytes.
  Method: get_num_frames(self, segment_index: 'int | None' = None) -> 'int'
    Docstring:
      Returns the number of samples for a segment.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index to retrieve the number of samples for.
          For multi-segment objects, it is required, default: None
          With single segment recording returns the number of samples in the segment
      
      Returns
      -------
      int
          The number of samples
  Method: get_num_samples(self, segment_index: 'int | None' = None) -> 'int'
    Docstring:
      Returns the number of samples for a segment.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index to retrieve the number of samples for.
          For multi-segment objects, it is required, default: None
          With single segment recording returns the number of samples in the segment
      
      Returns
      -------
      int
          The number of samples
  Method: get_num_segments(self) -> 'int'
    Docstring:
      Returns the number of segments.
      
      Returns
      -------
      int
          Number of segments in the recording
  Method: get_start_time(self, segment_index=None) -> 'float'
    Docstring:
      Get the start time of the recording segment.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index (required for multi-segment)
      
      Returns
      -------
      float
          The start time in seconds
  Method: get_time_info(self, segment_index=None) -> 'dict'
    Docstring:
      Retrieves the timing attributes for a given segment index. As with
      other recording methods, the segment index is only required for
      multi-segment recordings.
      
      Returns
      -------
      dict
          A dictionary containing the following key-value pairs:
      
          - "sampling_frequency" : The sampling frequency of the RecordingSegment.
          - "t_start" : The start time of the RecordingSegment.
          - "time_vector" : The time vector of the RecordingSegment.
      
      Notes
      -----
      The keys are always present, but the values may be None.
  Method: get_times(self, segment_index=None) -> 'np.ndarray'
    Docstring:
      Get time vector for a recording segment.
      
      If the segment has a time_vector, then it is returned. Otherwise
      a time_vector is constructed on the fly with sampling frequency.
      If t_start is defined and the time vector is constructed on the fly,
      the first time will be t_start. Otherwise it will start from 0.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index (required for multi-segment)
      
      Returns
      -------
      np.array
          The 1d times array
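The on-the-fly construction described above ("if no time_vector, start from t_start and step by the sampling period") can be sketched directly; the values are invented for the example.

```python
import numpy as np

# Sketch of the fallback time vector: t_start + arange(n) / fs
sampling_frequency = 30000.0
num_samples = 5
t_start = 2.0

times = t_start + np.arange(num_samples) / sampling_frequency
assert times[0] == 2.0
assert np.isclose(times[1] - times[0], 1.0 / sampling_frequency)
```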
  Method: get_total_duration(self) -> 'float'
    Docstring:
      Returns the total duration in seconds
      
      Returns
      -------
      float
          The duration in seconds
  Method: get_total_memory_size(self) -> 'int'
    Docstring:
      Returns the sum in bytes of all the memory sizes of the segments.
      
      Returns
      -------
      int
          The total memory size in bytes for all segments.
  Method: get_total_samples(self) -> 'int'
    Docstring:
      Returns the sum of the number of samples in each segment.
      
      Returns
      -------
      int
          The total number of samples
  Method: get_traces(self, segment_index: 'int | None' = None, start_frame: 'int | None' = None, end_frame: 'int | None' = None, channel_ids: 'list | np.array | tuple | None' = None, order: "'C' | 'F' | None" = None, return_scaled: 'bool' = False, cast_unsigned: 'bool' = False) -> 'np.ndarray'
    Docstring:
      Returns traces from recording.
      
      Parameters
      ----------
      segment_index : int | None, default: None
          The segment index to get traces from. If recording is multi-segment, it is required, default: None
      start_frame : int | None, default: None
          The start frame. If None, 0 is used, default: None
      end_frame : int | None, default: None
          The end frame. If None, the number of samples in the segment is used, default: None
      channel_ids : list | np.array | tuple | None, default: None
          The channel ids. If None, all channels are used, default: None
      order : "C" | "F" | None, default: None
          The order of the traces ("C" | "F"). If None, traces are returned as they are
      return_scaled : bool, default: False
          If True and the recording has scaling (gain_to_uV and offset_to_uV properties),
          traces are scaled to uV
      cast_unsigned : bool, default: False
          If True and the traces are unsigned, they are cast to integer and centered
          (an offset of (2**nbits) is subtracted)
      
      Returns
      -------
      np.array
          The traces (num_samples, num_channels)
      
      Raises
      ------
      ValueError
          If return_scaled is True, but recording does not have scaled traces
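The `return_scaled` conversion amounts to a per-channel affine transform using the `gain_to_uV` and `offset_to_uV` properties mentioned above. The gains and raw values below are invented for illustration.

```python
import numpy as np

# Raw integer traces, shape (num_samples, num_channels)
raw_traces = np.array([[100, -200], [50, 0]], dtype="int16")
gain_to_uV = np.array([0.195, 0.195])   # per-channel gain
offset_to_uV = np.array([0.0, 10.0])    # per-channel offset

# scaled[uV] = raw * gain + offset, broadcast across channels
scaled = raw_traces.astype("float32") * gain_to_uV + offset_to_uV
assert np.isclose(scaled[0, 0], 19.5)    # 100 * 0.195 + 0.0
assert np.isclose(scaled[0, 1], -29.0)   # -200 * 0.195 + 10.0
```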
  Method: has_scaled_traces(self) -> 'bool'
    Docstring:
      Checks if the recording has scaled traces
      
      Returns
      -------
      bool
          True if the recording has scaled traces, False otherwise
  Method: has_time_vector(self, segment_index=None)
    Docstring:
      Check if the segment of the recording has a time vector.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index (required for multi-segment)
      
      Returns
      -------
      bool
          True if the recording has time vectors, False otherwise
  Method: is_binary_compatible(self) -> 'bool'
    Docstring:
      Checks if the recording is "binary" compatible.
      To be used before calling `rec.get_binary_description()`
      
      Returns
      -------
      bool
          True if the underlying recording is binary
  Method: rename_channels(self, new_channel_ids: 'list | np.array | tuple') -> "'BaseRecording'"
    Docstring:
      Returns a new recording object with renamed channel ids.
      
      Note that this method does not modify the current recording and instead returns a new recording object.
      
      Parameters
      ----------
      new_channel_ids : list or np.array or tuple
          The new channel ids. They are mapped positionally to the old channel ids.
  Method: reset_times(self)
    Docstring:
      Reset time information in-memory for all segments that have a time vector.
      If the timestamps come from a file, the files won't be modified; only the in-memory
      attributes of the recording objects are deleted. Also `t_start` is set to None and the
      segment's sampling frequency is set to the recording's sampling frequency.
  Method: sample_index_to_time(self, sample_ind, segment_index=None)
    Docstring:
      Transform sample index into time in seconds
  Method: select_channels(self, channel_ids: 'list | np.array | tuple') -> "'BaseRecording'"
    Docstring:
      Returns a new recording object with a subset of channels.
      
      Note that this method does not modify the current recording and instead returns a new recording object.
      
      Parameters
      ----------
      channel_ids : list or np.array or tuple
          The channel ids to select.
  Method: set_times(self, times, segment_index=None, with_warning=True)
    Docstring:
      Set times for a recording segment.
      
      Parameters
      ----------
      times : 1d np.array
          The time vector
      segment_index : int or None, default: None
          The segment index (required for multi-segment)
      with_warning : bool, default: True
          If True, a warning is printed
  Method: shift_times(self, shift: 'int | float', segment_index: 'int | None' = None) -> 'None'
    Docstring:
      Shift all times by a scalar value.
      
      Parameters
      ----------
      shift : int | float
          The shift to apply. If positive, times will be increased by `shift`.
          e.g. shifting by 1 will be like the recording started 1 second later.
          If negative, the start time will be decreased i.e. as if the recording
          started earlier.
      
      segment_index : int | None
          The segment on which to shift the times.
          If `None`, all segments will be shifted.
  Method: time_slice(self, start_time: 'float | None', end_time: 'float | None') -> 'BaseRecording'
    Docstring:
      Returns a new recording object, restricted to the time interval [start_time, end_time].
      
      Parameters
      ----------
      start_time : float, optional
          The start time in seconds. If not provided it is set to 0.
      end_time : float, optional
          The end time in seconds. If not provided it is set to the total duration.
      
      Returns
      -------
      BaseRecording
          A new recording object with only samples between start_time and end_time
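For a segment with a plain `t_start` and sampling frequency, the time-to-frame mapping behind `time_slice` can be sketched as below. The rounding convention is an assumption for illustration; the library's exact conversion may differ.

```python
# Hypothetical segment timing parameters.
sampling_frequency = 30000.0
t_start = 1.0

def time_to_frame(time_s: float) -> int:
    # frame index relative to the segment start
    return int(round((time_s - t_start) * sampling_frequency))

# time_slice(1.5, 2.0) would then map to frame_slice(15000, 30000)
start_frame = time_to_frame(1.5)
end_frame = time_to_frame(2.0)
assert (start_frame, end_frame) == (15000, 30000)
```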
  Method: time_to_sample_index(self, time_s, segment_index=None)
    Docstring:
      None

Class: BaseRecordingSegment
  Docstring:
    Abstract class representing a multichannel timeseries, or block of raw ephys traces
  __init__(self, sampling_frequency=None, t_start=None, time_vector=None)
  Method: get_end_time(self) -> 'float'
    Docstring:
      None
  Method: get_num_samples(self) -> 'int'
    Docstring:
      Returns the number of samples in this signal segment
      
      Returns
      -------
      int
          Number of samples in the signal segment
  Method: get_start_time(self) -> 'float'
    Docstring:
      None
  Method: get_times(self) -> 'np.ndarray'
    Docstring:
      None
  Method: get_times_kwargs(self) -> 'dict'
    Docstring:
      Retrieves the timing attributes characterizing a RecordingSegment
      
      Returns
      -------
      dict
          A dictionary containing the following key-value pairs:
      
          - "sampling_frequency" : The sampling frequency of the RecordingSegment.
          - "t_start" : The start time of the RecordingSegment.
          - "time_vector" : The time vector of the RecordingSegment.
      
      Notes
      -----
      The keys are always present, but the values may be None.
  Method: get_traces(self, start_frame: 'int | None' = None, end_frame: 'int | None' = None, channel_indices: 'list | np.array | tuple | None' = None) -> 'np.ndarray'
    Docstring:
      Return the raw traces, optionally for a subset of samples and/or channels
      
      Parameters
      ----------
      start_frame : int | None, default: None
          start sample index, or zero if None
      end_frame : int | None, default: None
          end_sample, or number of samples if None
      channel_indices : list | np.array | tuple | None, default: None
          Indices of channels to return, or all channels if None
      
      Returns
      -------
      traces : np.ndarray
          Array of traces, num_samples x num_channels
  Method: sample_index_to_time(self, sample_ind)
    Docstring:
      Transform sample index into time in seconds
  Method: time_to_sample_index(self, time_s)
    Docstring:
      Transform time in seconds into sample index
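When a segment carries an explicit (possibly irregular) time vector, the two conversions above become a lookup in one direction and a sorted search in the other. This is a sketch with invented timestamps, not the library's implementation.

```python
import numpy as np

# Irregular per-sample timestamps for one segment (seconds)
time_vector = np.array([0.0, 0.1, 0.25, 0.4, 0.6])

def sample_index_to_time(sample_ind: int) -> float:
    # sample -> time is a direct lookup
    return float(time_vector[sample_ind])

def time_to_sample_index(time_s: float) -> int:
    # time -> sample via binary search on the sorted time vector
    return int(np.searchsorted(time_vector, time_s))

assert sample_index_to_time(2) == 0.25
assert time_to_sample_index(0.25) == 2
assert time_to_sample_index(0.3) == 3   # first sample at or after 0.3 s
```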

Class: BaseRecordingSnippets
  Docstring:
    Mixin that handles all probe and channel operations
  __init__(self, sampling_frequency: 'float', channel_ids: 'list[str, int]', dtype: 'np.dtype')
  Method: channel_slice(self, channel_ids, renamed_channel_ids=None)
    Docstring:
      Returns a new object with sliced channels.
      
      Parameters
      ----------
      channel_ids : np.array or list
          The list of channels to keep
      renamed_channel_ids : np.array or list, default: None
          A list of renamed channels
      
      Returns
      -------
      BaseRecordingSnippets
          The object with sliced channels
  Method: clear_channel_groups(self, channel_ids=None)
    Docstring:
      None
  Method: clear_channel_locations(self, channel_ids=None)
    Docstring:
      None
  Method: create_dummy_probe_from_locations(self, locations, shape='circle', shape_params={'radius': 1}, axes='xy')
    Docstring:
      Creates a "dummy" probe based on locations.
      
      Parameters
      ----------
      locations : np.array
          Array with channel locations (num_channels, ndim) [ndim can be 2 or 3]
      shape : str, default: "circle"
          Electrode shapes
      shape_params : dict, default: {"radius": 1}
          Shape parameters
      axes : str, default: "xy"
          If ndim is 3, indicates the axes that define the plane of the electrodes
      
      Returns
      -------
      probe : Probe
          The created probe
  Method: frame_slice(self, start_frame, end_frame)
    Docstring:
      Returns a new object with sliced frames.
      
      Parameters
      ----------
      start_frame : int
          The start frame
      end_frame : int
          The end frame
      
      Returns
      -------
      BaseRecordingSnippets
          The object with sliced frames
  Method: get_channel_gains(self, channel_ids=None)
    Docstring:
      None
  Method: get_channel_groups(self, channel_ids=None)
    Docstring:
      None
  Method: get_channel_ids(self)
    Docstring:
      None
  Method: get_channel_locations(self, channel_ids=None, axes: 'str' = 'xy') -> 'np.ndarray'
    Docstring:
      None
  Method: get_channel_offsets(self, channel_ids=None)
    Docstring:
      None
  Method: get_channel_property(self, channel_id, key)
    Docstring:
      None
  Method: get_dtype(self)
    Docstring:
      None
  Method: get_num_channels(self)
    Docstring:
      None
  Method: get_probe(self)
    Docstring:
      None
  Method: get_probegroup(self)
    Docstring:
      None
  Method: get_probes(self)
    Docstring:
      None
  Method: get_sampling_frequency(self)
    Docstring:
      None
  Method: has_3d_locations(self) -> 'bool'
    Docstring:
      None
  Method: has_channel_location(self) -> 'bool'
    Docstring:
      None
  Method: has_probe(self) -> 'bool'
    Docstring:
      None
  Method: has_scaleable_traces(self) -> 'bool'
    Docstring:
      None
  Method: has_scaled(self)
    Docstring:
      None
  Method: is_filtered(self)
    Docstring:
      None
  Method: planarize(self, axes: 'str' = 'xy')
    Docstring:
      Returns a Recording with a 2D probe from one with a 3D probe
      
      Parameters
      ----------
      axes : "xy" | "yz" |"xz", default: "xy"
          The axes to keep
      
      Returns
      -------
      BaseRecording
          The recording with 2D positions
  Method: remove_channels(self, remove_channel_ids)
    Docstring:
      Returns a new object with removed channels.
      
      
      Parameters
      ----------
      remove_channel_ids : np.array or list
          The list of channels to remove
      
      Returns
      -------
      BaseRecordingSnippets
          The object with removed channels
  Method: select_channels(self, channel_ids)
    Docstring:
      Returns a new object with sliced channels.
      
      Parameters
      ----------
      channel_ids : np.array or list
          The list of channels to keep
      
      Returns
      -------
      BaseRecordingSnippets
          The object with sliced channels
  Method: select_segments(self, segment_indices)
    Docstring:
      Return a new object with the segments specified by "segment_indices".
      
      Parameters
      ----------
      segment_indices : list of int
          List of segment indices to keep in the returned recording
      
      Returns
      -------
      BaseRecordingSnippets
          The object with the selected segments
  Method: set_channel_gains(self, gains, channel_ids=None)
    Docstring:
      None
  Method: set_channel_groups(self, groups, channel_ids=None)
    Docstring:
      None
  Method: set_channel_locations(self, locations, channel_ids=None)
    Docstring:
      None
  Method: set_channel_offsets(self, offsets, channel_ids=None)
    Docstring:
      None
  Method: set_dummy_probe_from_locations(self, locations, shape='circle', shape_params={'radius': 1}, axes='xy')
    Docstring:
      Sets a "dummy" probe based on locations.
      
      Parameters
      ----------
      locations : np.array
          Array with channel locations (num_channels, ndim) [ndim can be 2 or 3]
      shape : str, default: "circle"
          Electrode shapes
      shape_params : dict, default: {"radius": 1}
          Shape parameters
      axes : "xy" | "yz" | "xz", default: "xy"
          If ndim is 3, indicates the axes that define the plane of the electrodes
  Method: set_probe(self, probe, group_mode='by_probe', in_place=False)
    Docstring:
      Attach a Probe, a list of Probes, or a ProbeGroup to a recording.
      
      Parameters
      ----------
      probe : Probe, list of Probe, or ProbeGroup
          The probe(s) to be attached to the recording
      group_mode : "by_probe" | "by_shank", default: "by_probe"
          Adds a grouping property to the recording based on the probes ("by_probe")
          or the shanks ("by_shank")
      in_place : bool, default: False
          Useful internally when an extractor does self.set_probegroup(probe)
      
      Returns
      -------
      sub_recording: BaseRecording
          A view of the recording (ChannelSlice or clone or itself)
  Method: set_probegroup(self, probegroup, group_mode='by_probe', in_place=False)
    Docstring:
      None
  Method: set_probes(self, probe_or_probegroup, group_mode='by_probe', in_place=False)
    Docstring:
      None
  Method: split_by(self, property='group', outputs='dict')
    Docstring:
      Splits object based on a certain property (e.g. "group")
      
      Parameters
      ----------
      property : str, default: "group"
          The property to use to split the object
      outputs : "dict" | "list", default: "dict"
          Whether to return a dict or a list
      
      Returns
      -------
      dict or list
          A dict or list with grouped objects based on property
      
      Raises
      ------
      ValueError
          Raised when property is not present
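The property-based split described above can be sketched with plain Python. This is an illustrative stand-in for `split_by`, not spikeinterface's implementation: it groups channel ids by a per-channel property value and returns a dict keyed by that value.

```python
# Minimal sketch of a property-based split: group ids by a per-item
# property value (e.g. the "group" property). Illustrative only.
def split_by_property(ids, values):
    out = {}
    for channel_id, value in zip(ids, values):
        out.setdefault(value, []).append(channel_id)
    return out

# Four channels, two groups
groups = split_by_property(["ch0", "ch1", "ch2", "ch3"], [0, 0, 1, 1])
```

The real method returns grouped recording objects rather than bare id lists, and raises `ValueError` when the property is missing.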

Class: BaseSnippets
  Docstring:
    Abstract class representing several multichannel snippets.
  __init__(self, sampling_frequency: 'float', nbefore: 'Union[int, None]', snippet_len: 'int', channel_ids: 'list', dtype)
  Method: add_snippets_segment(self, snippets_segment)
    Docstring:
      None
  Method: get_frames(self, indices=None, segment_index: 'Union[int, None]' = None)
    Docstring:
      None
  Method: get_num_segments(self)
    Docstring:
      None
  Method: get_num_snippets(self, segment_index=None)
    Docstring:
      None
  Method: get_snippets(self, indices=None, segment_index: 'Union[int, None]' = None, channel_ids: 'Union[list, None]' = None, return_scaled=False)
    Docstring:
      None
  Method: get_snippets_from_frames(self, segment_index: 'Union[int, None]' = None, start_frame: 'Union[int, None]' = None, end_frame: 'Union[int, None]' = None, channel_ids: 'Union[list, None]' = None, return_scaled=False)
    Docstring:
      None
  Method: get_times(self)
    Docstring:
      None
  Method: get_total_snippets(self)
    Docstring:
      None
  Method: has_scaled_snippets(self)
    Docstring:
      None
  Method: is_aligned(self)
    Docstring:
      None
  Method: select_channels(self, channel_ids: 'list | np.array | tuple') -> "'BaseSnippets'"
    Docstring:
      Returns a new object with sliced channels.
      
      Parameters
      ----------
      channel_ids : np.array or list
          The list of channels to keep
      
      Returns
      -------
      BaseRecordingSnippets
          The object with sliced channels

Class: BaseSnippetsSegment
  Docstring:
    Abstract class representing multichannel snippets
  __init__(self)
  Method: frames_to_indices(self, start_frame: 'Union[int, None]' = None, end_frame: 'Union[int, None]' = None)
    Docstring:
      Return the slice of snippets
      
      Parameters
      ----------
      start_frame : Union[int, None], default: None
          start sample index, or zero if None
      end_frame : Union[int, None], default: None
          end sample index, or the number of samples if None
      
      Returns
      -------
      snippets : slice
          slice of selected snippets
  Method: get_frames(self, indices)
    Docstring:
      Returns the frames of the snippets in this segment
      
      Returns:
          frames : Frame index of each snippet in the segment
  Method: get_num_snippets(self)
    Docstring:
      Returns the number of snippets in this segment
      
      Returns:
          num_snippets : Number of snippets in the segment
  Method: get_snippets(self, indices, channel_indices: 'Union[list, None]' = None) -> 'np.ndarray'
    Docstring:
      Return the snippets, optionally for a subset of samples and/or channels
      
      Parameters
      ----------
      indices : list[int]
          Indices of the snippets to return
      channel_indices : Union[list, None], default: None
          Indices of channels to return, or all channels if None
      
      Returns
      -------
      snippets : np.ndarray
          Array of snippets, num_snippets x num_samples x num_channels

Class: BaseSorting
  Docstring:
    Abstract class representing several segments, each with several units and their spike trains.
  __init__(self, sampling_frequency: 'float', unit_ids: 'list')
  Method: add_sorting_segment(self, sorting_segment)
    Docstring:
      None
  Method: count_num_spikes_per_unit(self, outputs='dict')
    Docstring:
      For each unit, get the number of spikes across segments.
      
      Parameters
      ----------
      outputs : "dict" | "array", default: "dict"
          Control the type of the returned object : a dict (keys are unit_ids) or a numpy array.
      
      Returns
      -------
      dict or numpy.array
          Dict : Dictionary with unit_ids as key and number of spikes as values
          Numpy array : array of size len(unit_ids) in the same order as unit_ids.
  Method: count_total_num_spikes(self) -> 'int'
    Docstring:
      Get total number of spikes in the sorting.
      
      This is the sum of all spikes in all segments across all units.
      
      Returns
      -------
      total_num_spikes : int
          The total number of spikes
  Method: frame_slice(self, start_frame, end_frame, check_spike_frames=True)
    Docstring:
      None
  Method: get_empty_unit_ids(self) -> 'np.ndarray'
    Docstring:
      Return the unit IDs that have zero spikes across all segments.
      
      This method returns the complement of `get_non_empty_unit_ids` with respect
      to all unit IDs in the sorting.
      
      Returns
      -------
      np.ndarray
          Array of unit IDs (same dtype as self.unit_ids) for which no spikes exist.
  Method: get_non_empty_unit_ids(self) -> 'np.ndarray'
    Docstring:
      Return the unit IDs that have at least one spike across all segments.
      
      This method computes the number of spikes for each unit using
      `count_num_spikes_per_unit` and filters out units with zero spikes.
      
      Returns
      -------
      np.ndarray
          Array of unit IDs (same dtype as self.unit_ids) for which at least one spike exists.
  Method: get_num_samples(self, segment_index=None) -> 'int'
    Docstring:
      Returns the number of samples of the associated recording for a segment.
      
      Parameters
      ----------
      segment_index : int or None, default: None
          The segment index to retrieve the number of samples for.
          For multi-segment objects, it is required
      
      Returns
      -------
      int
          The number of samples
  Method: get_num_segments(self) -> 'int'
    Docstring:
      None
  Method: get_num_units(self) -> 'int'
    Docstring:
      None
  Method: get_sampling_frequency(self) -> 'float'
    Docstring:
      None
  Method: get_times(self, segment_index=None)
    Docstring:
      Get time vector for a registered recording segment.
      
      If a recording is registered:
          * if the segment has a time_vector, then it is returned
          * if not, a time_vector is constructed on the fly with sampling frequency
      
      If there is no registered recording it returns None
  Method: get_total_duration(self) -> 'float'
    Docstring:
      Returns the total duration in s of the associated recording.
      
      Returns
      -------
      float
          The duration in seconds
  Method: get_total_samples(self) -> 'int'
    Docstring:
      Returns the total number of samples of the associated recording.
      
      Returns
      -------
      int
          The total number of samples
  Method: get_unit_ids(self) -> 'list'
    Docstring:
      None
  Method: get_unit_property(self, unit_id, key)
    Docstring:
      None
  Method: get_unit_spike_train(self, unit_id: 'str | int', segment_index: 'Union[int, None]' = None, start_frame: 'Union[int, None]' = None, end_frame: 'Union[int, None]' = None, return_times: 'bool' = False, use_cache: 'bool' = True)
    Docstring:
      None
  Method: has_recording(self) -> 'bool'
    Docstring:
      None
  Method: has_time_vector(self, segment_index=None) -> 'bool'
    Docstring:
      Check if the segment of the registered recording has a time vector.
  Method: precompute_spike_trains(self, from_spike_vector=None)
    Docstring:
      Pre-computes and caches all spike trains for this sorting
      
      Parameters
      ----------
      from_spike_vector : None | bool, default: None
          If None, then it is automatic depending on whether the spike vector is cached.
          If True, will compute it from the spike vector.
          If False, will call `get_unit_spike_train` for each segment for each unit.
  Method: register_recording(self, recording, check_spike_frames=True)
    Docstring:
      Register a recording to the sorting. If the sorting and recording both contain
      time information, the recording’s time information will be used.
      
      Parameters
      ----------
      recording : BaseRecording
          Recording with the same number of segments as current sorting.
          Assigned to self._recording.
      check_spike_frames : bool, default: True
          If True, assert for each segment that all spikes are within the recording's range.
  Method: remove_empty_units(self)
    Docstring:
      Returns a new sorting object which contains only units with at least one spike.
      For multi-segments, a unit is considered empty if it contains no spikes in any segment.
      
      Returns
      -------
      BaseSorting
          Sorting object with non-empty units
  Method: remove_units(self, remove_unit_ids) -> 'BaseSorting'
    Docstring:
      Returns a new sorting object with the specified units removed.
      
      Parameters
      ----------
      remove_unit_ids :  numpy.array or list
          List of unit ids to remove
      
      Returns
      -------
      BaseSorting
          Sorting without the removed units
  Method: rename_units(self, new_unit_ids: 'np.ndarray | list') -> 'BaseSorting'
    Docstring:
      Returns a new sorting object with renamed units.
      
      
      Parameters
      ----------
      new_unit_ids : numpy.array or list
          List of new names for unit ids.
          They should map positionally to the existing unit ids.
      
      Returns
      -------
      BaseSorting
          Sorting object with renamed units
  Method: select_units(self, unit_ids, renamed_unit_ids=None) -> 'BaseSorting'
    Docstring:
      Returns a new sorting object which contains only a selected subset of units.
      
      
      Parameters
      ----------
      unit_ids : numpy.array or list
          List of unit ids to keep
      renamed_unit_ids : numpy.array or list, default: None
          If given, the kept unit ids are renamed
      
      Returns
      -------
      BaseSorting
          Sorting object with selected units
  Method: set_sorting_info(self, recording_dict, params_dict, log_dict)
    Docstring:
      None
  Method: time_slice(self, start_time: 'float | None', end_time: 'float | None') -> 'BaseSorting'
    Docstring:
      Returns a new sorting object, restricted to the time interval [start_time, end_time].
      
      Parameters
      ----------
      start_time : float | None, default: None
          The start time in seconds. If not provided it is set to 0.
      end_time : float | None, default: None
          The end time in seconds. If not provided it is set to the total duration.
      
      Returns
      -------
      BaseSorting
          A new sorting object with only samples between start_time and end_time
  Method: time_to_sample_index(self, time, segment_index=0)
    Docstring:
      Transform time in seconds into sample index
  Method: to_multiprocessing(self, n_jobs)
    Docstring:
      When necessary, turn the sorting object into:
      * NumpySorting when n_jobs=1
      * SharedMemorySorting when n_jobs>1
      
      If the sorting is already a NumpySorting, SharedMemorySorting or NumpyFolderSorting,
      then the sorting itself is returned and no transformation is applied.
      
      Parameters
      ----------
      n_jobs : int
          The number of jobs.
      Returns
      -------
      sharable_sorting:
          A sorting that can be used for multiprocessing.
  Method: to_numpy_sorting(self, propagate_cache=True)
    Docstring:
      Turn any sorting into a NumpySorting.
      Useful to have it in memory with a unique vector representation.
      
      Parameters
      ----------
      propagate_cache : bool
          Propagate the cache of individual spike trains.
  Method: to_shared_memory_sorting(self)
    Docstring:
      Turn any sorting into a SharedMemorySorting.
      Useful to have it in memory with a unique vector representation that is shareable across processes.
  Method: to_spike_vector(self, concatenated=True, extremum_channel_inds=None, use_cache=True) -> 'np.ndarray | list[np.ndarray]'
    Docstring:
      Construct a unique structured numpy vector concatenating all spikes
      with several fields: sample_index, unit_index, segment_index.
      
      
      Parameters
      ----------
      concatenated : bool, default: True
          With concatenated=True the output is one numpy "spike vector" with spikes from all segments.
          With concatenated=False the output is a list "spike vector" by segment.
      extremum_channel_inds : None or dict, default: None
          If a dictionary of unit_id to channel_ind is given, then an extra field "channel_index" is added.
          This can be convenient for computing spike positions after sorting.
          This dict can be computed with `get_template_extremum_channel(we, outputs="index")`
      use_cache : bool, default: True
          When True the spikes vector is cached as an attribute of the object (`_cached_spike_vector`).
          This caching only occurs when extremum_channel_inds=None.
      
      Returns
      -------
      spikes : np.array
          Structured numpy array ("sample_index", "unit_index", "segment_index") with all spikes
          Or ("sample_index", "unit_index", "segment_index", "channel_index") if extremum_channel_inds
          is given
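The structured "spike vector" layout described above can be sketched with plain numpy. This is an illustrative mock of the data structure, not spikeinterface's implementation; the helper name and input format are hypothetical:

```python
import numpy as np

# The three fields of the spike vector described above
spike_dtype = [("sample_index", "int64"), ("unit_index", "int64"), ("segment_index", "int64")]

def build_spike_vector(spiketrains_by_segment):
    """spiketrains_by_segment: list (one per segment) of dicts unit_index -> sample indices."""
    entries = []
    for seg_index, trains in enumerate(spiketrains_by_segment):
        for unit_index, samples in trains.items():
            for sample in samples:
                entries.append((sample, unit_index, seg_index))
    spikes = np.array(entries, dtype=spike_dtype)
    # sort by segment first, then by sample index within each segment
    order = np.lexsort((spikes["sample_index"], spikes["segment_index"]))
    return spikes[order]

# Two segments, two units
spikes = build_spike_vector([{0: [10, 50], 1: [30]}, {0: [5], 1: [1, 7]}])
```

The concatenated form makes time-ordered iteration over all spikes trivial, which is why the analyzer caches it.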

Class: BaseSortingSegment
  Docstring:
    Abstract class representing several units and relative spiketrain inside a segment.
  __init__(self, t_start=None)
  Method: get_unit_spike_train(self, unit_id, start_frame: 'Optional[int]' = None, end_frame: 'Optional[int]' = None) -> 'np.ndarray'
    Docstring:
      Get the spike train for a unit.
      
      Parameters
      ----------
      unit_id
      start_frame : int, default: None
      end_frame : int, default: None
      
      Returns
      -------
      np.ndarray

Class: BinaryFolderRecording
  Docstring:
    BinaryFolderRecording is an internal format used in spikeinterface.
    It is a BinaryRecordingExtractor + metadata contained in a folder.
    
    It is created with the function: `recording.save(format="binary", folder="/myfolder")`
    
    Parameters
    ----------
    folder_path : str or Path
    
    Returns
    -------
    recording : BinaryFolderRecording
        The recording
  __init__(self, folder_path)
  Method: get_binary_description(self)
    Docstring:
      When `rec.is_binary_compatible()` is True
      this returns a dictionary describing the binary format.
  Method: is_binary_compatible(self) -> 'bool'
    Docstring:
      Checks if the recording is "binary" compatible.
      To be used before calling `rec.get_binary_description()`
      
      Returns
      -------
      bool
          True if the underlying recording is binary

Class: BinaryRecordingExtractor
  Docstring:
    RecordingExtractor for a binary format
    
    Parameters
    ----------
    file_paths : str or Path or list
        Path to the binary file
    sampling_frequency : float
        The sampling frequency
    num_channels : int
        Number of channels
    dtype : str or dtype
        The dtype of the binary file
    time_axis : int, default: 0
        The axis of the time dimension
    t_starts : None or list of float, default: None
        Times in seconds of the first sample for each segment
    channel_ids : list, default: None
        A list of channel ids
    file_offset : int, default: 0
        Number of bytes in the file to offset by during memmap instantiation.
    gain_to_uV : float or array-like, default: None
        The gain to apply to the traces
    offset_to_uV : float or array-like, default: None
        The offset to apply to the traces
    is_filtered : bool or None, default: None
        If True, the recording is assumed to be filtered. If None, is_filtered is not set.
    
    Notes
    -----
    When both num_channels and num_chan are provided, `num_channels` is used and `num_chan` is ignored.
    
    Returns
    -------
    recording : BinaryRecordingExtractor
        The recording Extractor
  __init__(self, file_paths, sampling_frequency, dtype, num_channels: 'int', t_starts=None, channel_ids=None, time_axis=0, file_offset=0, gain_to_uV=None, offset_to_uV=None, is_filtered=None)
  Method: get_binary_description(self)
    Docstring:
      When `rec.is_binary_compatible()` is True
      this returns a dictionary describing the binary format.
  Method: is_binary_compatible(self) -> 'bool'
    Docstring:
      Checks if the recording is "binary" compatible.
      To be used before calling `rec.get_binary_description()`
      
      Returns
      -------
      bool
          True if the underlying recording is binary
  Method: write_recording(recording, file_paths, dtype=None, **job_kwargs)
    Docstring:
      Save the traces of a recording extractor in binary .dat format.
      
      Parameters
      ----------
      recording : RecordingExtractor
          The recording extractor object to be saved in .dat format
      file_paths : str
          The path to the file.
      dtype : dtype, default: None
          Type of the saved data
      **job_kwargs : keyword arguments for parallel processing:
          * chunk_duration or chunk_size or chunk_memory or total_memory
              - chunk_size : int
                  Number of samples per chunk
              - chunk_memory : str
                  Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
              - total_memory : str
                  Total memory usage (e.g. "500M", "2G")
              - chunk_duration : str or float or None
                  Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
          * n_jobs : int | float
              Number of jobs to use. With -1 the number of jobs is the same as number of cores.
              Using a float between 0 and 1 will use that fraction of the total cores.
          * progress_bar : bool
              If True, a progress bar is printed
          * mp_context : "fork" | "spawn" | None, default: None
              Context for multiprocessing. It can be None, "fork" or "spawn".
              Note that "fork" is only safely available on LINUX systems
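The relationship between the `chunk_memory` budget and a chunk size in samples can be sketched as below. The unit parsing and function names here are illustrative, not spikeinterface's internal code:

```python
import numpy as np

# Hypothetical helper: derive a chunk size (in samples) from a memory
# budget string like "100M", mirroring the chunk_memory option above.
_UNITS = {"k": 1024, "M": 1024**2, "G": 1024**3}

def parse_memory(mem_str):
    # "100M" -> 100 * 1024**2 bytes
    return int(float(mem_str[:-1]) * _UNITS[mem_str[-1]])

def chunk_size_from_memory(chunk_memory, num_channels, dtype):
    # one sample of traces spans all channels
    bytes_per_sample = np.dtype(dtype).itemsize * num_channels
    return parse_memory(chunk_memory) // bytes_per_sample

# 100M budget, 384 int16 channels
size = chunk_size_from_memory("100M", num_channels=384, dtype="int16")
```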

Class: ChannelSliceRecording
  Docstring:
    Class to slice a Recording object based on channel_ids.
    
    Not intended to be used directly, use methods of `BaseRecording` such as `recording.select_channels`.
  __init__(self, parent_recording, channel_ids=None, renamed_channel_ids=None)

Class: ChannelSliceSnippets
  Docstring:
    Class to slice a Snippets object based on channel_ids.
    
    Do not use this class directly but use `snippets.channel_slice(...)`
  __init__(self, parent_snippets, channel_ids=None, renamed_channel_ids=None)

Class: ChannelSparsity
  Docstring:
    Handle channel sparsity for a set of units. That is, for every unit,
    it indicates which channels are used to represent the waveform and the rest
    of the non-represented channels are assumed to be zero.
    
    Internally, sparsity is stored as a boolean mask.
    
    The ChannelSparsity object can also provide other sparsity representations:
    
        * ChannelSparsity.unit_id_to_channel_ids : unit_id to channel_ids
        * ChannelSparsity.unit_id_to_channel_indices : unit_id channel_inds
    
    By default it is constructed with a boolean array:
    
    >>> sparsity = ChannelSparsity(mask, unit_ids, channel_ids)
    
    But can be also constructed from a dictionary:
    
    >>> sparsity = ChannelSparsity.from_unit_id_to_channel_ids(unit_id_to_channel_ids, unit_ids, channel_ids)
    
    Parameters
    ----------
    mask : np.array of bool
        The sparsity mask (num_units, num_channels)
    unit_ids : list or array
        Unit ids vector or list
    channel_ids : list or array
        Channel ids vector or list
    
    Examples
    --------
    
    The class can also be used to construct/estimate the sparsity from a SortingAnalyzer or a Templates
    with several methods:
    
    Using the N best channels (largest template amplitude):
    
    >>> sparsity = ChannelSparsity.from_best_channels(sorting_analyzer, num_channels, peak_sign="neg")
    
    Using a neighborhood by radius:
    
    >>> sparsity = ChannelSparsity.from_radius(sorting_analyzer, radius_um, peak_sign="neg")
    
    Using a SNR threshold:
    >>> sparsity = ChannelSparsity.from_snr(sorting_analyzer, threshold, peak_sign="neg")
    
    Using a template energy threshold:
    >>> sparsity = ChannelSparsity.from_energy(sorting_analyzer, threshold)
    
    Using a recording/sorting property (e.g. "group"):
    
    >>> sparsity = ChannelSparsity.from_property(sorting_analyzer, by_property="group")
  __init__(self, mask, unit_ids, channel_ids)
  Method: are_waveforms_dense(self, waveforms: 'np.ndarray') -> 'bool'
    Docstring:
      None
  Method: are_waveforms_sparse(self, waveforms: 'np.ndarray', unit_id: 'str | int') -> 'bool'
    Docstring:
      None
  Method: densify_waveforms(self, waveforms: 'np.ndarray', unit_id: 'str | int') -> 'np.ndarray'
    Docstring:
      Densify sparse waveforms that were sparsified according to a unit's channel sparsity.
      
      Given a unit_id and its sparsified waveform, this method places the waveform back
      into its original form within a dense array.
      
      Parameters
      ----------
      waveforms : np.array
          The sparsified waveforms array of shape (num_waveforms, num_samples, num_active_channels) or a single
          sparse waveform (template) with shape (num_samples, num_active_channels).
      unit_id : str
          The unit_id that was used to sparsify the waveform.
      
      Returns
      -------
      densified_waveforms : np.array
          The densified waveforms array of shape (num_waveforms, num_samples, num_channels) or a single dense
          waveform (template) with shape (num_samples, num_channels).
  Method: sparsify_templates(self, templates_array: 'np.ndarray') -> 'np.ndarray'
    Docstring:
      None
  Method: sparsify_waveforms(self, waveforms: 'np.ndarray', unit_id: 'str | int') -> 'np.ndarray'
    Docstring:
      Sparsify the waveforms according to the sparsity corresponding to a given unit_id.
      
      
      Given a unit_id, this method selects only the active channels for
      that unit and removes the rest.
      
      Parameters
      ----------
      waveforms : np.array
          Dense waveforms with shape (num_waveforms, num_samples, num_channels) or a
          single dense waveform (template) with shape (num_samples, num_channels).
      unit_id : str
          The unit_id for which to sparsify the waveform.
      
      Returns
      -------
      sparsified_waveforms : np.array
          Sparse waveforms with shape (num_waveforms, num_samples, num_active_channels)
          or a single sparsified waveform (template) with shape (num_samples, num_active_channels).
  Method: to_dict(self)
    Docstring:
      Return a serializable dict.

Class: ChannelsAggregationRecording
  Docstring:
    Class that handles aggregating channels from different recordings, e.g. from different channel groups.
    
    Do not use this class directly but use `si.aggregate_channels(...)`
  __init__(self, recording_list_or_dict=None, renamed_channel_ids=None, recording_list=None)

Class: ChunkRecordingExecutor
  Docstring:
    Core class for parallel processing to run a "function" over chunks on a recording.
    
    It supports running a function:
        * in loop with chunk processing (low RAM usage)
        * at once if chunk_size is None (high RAM usage)
        * in parallel with ProcessPoolExecutor (higher speed)
    
    The initializer ("init_func") allows setting a global context to avoid heavy serialization
    (for examples, see the implementations in `core.waveform_tools`).
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording to be processed
    func : function
        Function that runs on each chunk
    init_func : function
        Initializer function to set the global context (accessible by "func")
    init_args : tuple
        Arguments for init_func
    verbose : bool
        If True, output is verbose
    job_name : str, default: ""
        Job name
    progress_bar : bool, default: False
        If True, a progress bar is printed to monitor the progress of the process
    handle_returns : bool, default: False
        If True, the function can return values
    gather_func : None or callable, default: None
        Optional function that is called in the main thread and retrieves the results of each worker.
        This function can be used instead of `handle_returns` to implement custom storage on-the-fly.
    pool_engine : "process" | "thread", default: "thread"
        If n_jobs>1 then use ProcessPoolExecutor or ThreadPoolExecutor
    n_jobs : int, default: 1
        Number of jobs to be used. Use -1 to use as many jobs as number of cores
    total_memory : str, default: None
        Total memory (RAM) to use (e.g. "1G", "500M")
    chunk_memory : str, default: None
        Memory per chunk (RAM) to use (e.g. "1G", "500M")
    chunk_size : int or None, default: None
        Size of each chunk in number of samples. If "total_memory" or "chunk_memory" are used, it is ignored.
    chunk_duration : str or float or None
        Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
    mp_context : "fork" | "spawn" | None, default: None
        "fork" or "spawn". If None, the context is taken by the recording.get_preferred_mp_context().
        "fork" is only safely available on LINUX systems.
    max_threads_per_worker : int or None, default: None
        Limit the number of threads per process using the threadpoolctl module.
        This is used only when n_jobs>1.
        If None, no limit.
    need_worker_index : bool, default: False
        If True then each worker will also have a "worker_index" injected in the local worker dict.
    
    Returns
    -------
    res : list
        If "handle_returns" is True, the results for each chunk process
  __init__(self, recording, func, init_func, init_args, verbose=False, progress_bar=False, handle_returns=False, gather_func=None, pool_engine='thread', n_jobs=1, total_memory=None, chunk_size=None, chunk_memory=None, chunk_duration=None, mp_context=None, job_name='', max_threads_per_worker=1, need_worker_index=False)
  Method: run(self, recording_slices=None)
    Docstring:
      Runs the defined jobs.
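The executor splits each segment into chunks whose size is derived from `chunk_memory` (or `total_memory` / `chunk_size` / `chunk_duration`). A minimal numpy sketch of that bookkeeping; the helper names are illustrative, not part of the library API:

```python
import numpy as np

def compute_chunk_size(num_channels, dtype, chunk_memory_bytes):
    # Number of samples per chunk so that one chunk of traces
    # (num_samples x num_channels) fits in chunk_memory_bytes.
    bytes_per_sample = num_channels * np.dtype(dtype).itemsize
    return chunk_memory_bytes // bytes_per_sample

def split_in_chunks(total_samples, chunk_size):
    # Return (start_frame, end_frame) pairs covering the whole segment;
    # the last chunk may be shorter.
    return [(start, min(start + chunk_size, total_samples))
            for start in range(0, total_samples, chunk_size)]

chunk_size = compute_chunk_size(num_channels=64, dtype="float32", chunk_memory_bytes=1_000_000)
chunks = split_in_chunks(total_samples=30_000, chunk_size=chunk_size)
```

Each `(start_frame, end_frame)` pair is then dispatched to a worker, which calls `func` on that slice of the recording.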

Class: ComputeNoiseLevels
  Docstring:
    Computes the noise level associated with each recording channel.
    
    This extension wraps `get_noise_levels(recording)` to make the noise levels persistent
    on disk (folder or zarr) as a `WaveformExtension`.
    The noise levels do not depend on the unit list, only on the recording, but this is a convenient way to
    retrieve the noise levels directly in the WaveformExtractor.
    
    Note that the noise levels can be scaled or not, depending on the `return_scaled` parameter
    of the `SortingAnalyzer`.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object
    **kwargs : dict
        Additional parameters for the `spikeinterface.get_noise_levels()` function
    
    Returns
    -------
    noise_levels : np.array
        The noise level vector
  __init__(self, sorting_analyzer)
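A common robust estimator for per-channel noise levels (used by spikeinterface's `get_noise_levels` in its MAD mode) is the median absolute deviation, scaled to match the standard deviation for Gaussian noise. A self-contained sketch of that estimator; the function name is illustrative:

```python
import numpy as np

def mad_noise_levels(traces):
    # Robust per-channel noise estimate: median absolute deviation,
    # scaled by 1/0.6745 so it equals the std for Gaussian noise.
    med = np.median(traces, axis=0, keepdims=True)
    return np.median(np.abs(traces - med), axis=0) / 0.6744897501960817

rng = np.random.default_rng(0)
traces = rng.normal(0.0, 5.0, size=(20_000, 4))  # 4 channels, true std = 5
noise = mad_noise_levels(traces)
```

Unlike a plain std, the MAD is barely affected by large spike transients riding on the noise.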

Class: ComputeRandomSpikes
  Docstring:
    AnalyzerExtension that selects some random spikes.
    This allows subsampling spikes for further calculations and is important
    for managing the amount of memory and the speed of computation in the analyzer.
    
    This will be used by the `waveforms`/`templates` extensions.
    
    This internally uses `random_spikes_selection()` parameters.
    
    Parameters
    ----------
    method : "uniform" | "all", default: "uniform"
        The method to select the spikes
    max_spikes_per_unit : int, default: 500
        The maximum number of spikes per unit, ignored if method="all"
    margin_size : int, default: None
        A margin on each border of segments to avoid border spikes, ignored if method="all"
    seed : int or None, default: None
        A seed for the random generator, ignored if method="all"
    
    Returns
    -------
    random_spike_indices: np.array
        The indices of the selected spikes
  __init__(self, sorting_analyzer)
  Method: get_random_spikes(self)
    Docstring:
      None
  Method: get_selected_indices_in_spike_train(self, unit_id, segment_index)
    Docstring:
      None
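The "uniform" method amounts to drawing at most `max_spikes_per_unit` spike indices per unit, without replacement, from a seeded generator. A minimal sketch under those assumptions; the helper name is illustrative:

```python
import numpy as np

def select_random_spikes(spike_unit_indices, max_spikes_per_unit=500, seed=None):
    # For each unit, keep at most max_spikes_per_unit spike indices,
    # drawn uniformly without replacement; units with fewer spikes keep them all.
    rng = np.random.default_rng(seed)
    selected = []
    for unit in np.unique(spike_unit_indices):
        (indices,) = np.nonzero(spike_unit_indices == unit)
        if indices.size > max_spikes_per_unit:
            indices = rng.choice(indices, size=max_spikes_per_unit, replace=False)
        selected.append(indices)
    # Return indices sorted to preserve temporal order in the spike vector.
    return np.sort(np.concatenate(selected))

unit_indices = np.repeat([0, 1, 2], [1000, 300, 50])
sel = select_random_spikes(unit_indices, max_spikes_per_unit=100, seed=42)
```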

Class: ComputeTemplates
  Docstring:
    AnalyzerExtension that computes templates (average, std, median, percentile, ...)
    
    This depends on the "waveforms" extension (`SortingAnalyzer.compute("waveforms")`)
    
    When the "waveforms" extension is already computed, then the recording is not needed anymore for this extension.
    
    Note: by default only the average and std are computed. Other operators (median, percentile) can be computed on demand
    after `SortingAnalyzer.compute("templates")`, in which case the data dict is updated on demand.
    
    Parameters
    ----------
    operators: list[str] | list[(str, float)] (for percentile)
        The operators to compute. Can be "average", "std", "median", "percentile"
        If percentile is used, then the second element of the tuple is the percentile to compute.
    
    Returns
    -------
    templates: np.ndarray
        The computed templates with shape (num_units, num_samples, num_channels)
  __init__(self, sorting_analyzer)
  Method: get_templates(self, unit_ids=None, operator='average', percentile=None, save=True, outputs='numpy')
    Docstring:
      Return templates (average, std, median or percentiles) for multiple units.
      
      If not computed yet then this is computed on demand and optionally saved.
      
      Parameters
      ----------
      unit_ids : list or None
          Unit ids to retrieve waveforms for
      operator : "average" | "median" | "std" | "percentile", default: "average"
          The operator to compute the templates
      percentile : float, default: None
          Percentile to use for operator="percentile"
      save : bool, default: True
          If the operator is not computed yet, it can be saved to folder or zarr
      outputs : "numpy" | "Templates", default: "numpy"
          Whether to return a numpy array or a Templates object
      
      Returns
      -------
      templates : np.array | Templates
          The returned templates (num_units, num_samples, num_channels)
  Method: get_unit_template(self, unit_id, operator='average')
    Docstring:
      Return template for a single unit.
      
      Parameters
      ----------
      unit_id: str | int
          Unit id to retrieve waveforms for
      operator: str, default: "average"
           The operator to compute the templates
      
      Returns
      -------
      template: np.array
          The returned template (num_samples, num_channels)
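Given a waveforms array for one unit, each operator reduces over the spike axis. A sketch of how the operators listed above map onto numpy reductions (the variable names are illustrative):

```python
import numpy as np

# waveforms for one unit: (num_spikes, num_samples, num_channels)
rng = np.random.default_rng(0)
waveforms = rng.normal(size=(300, 90, 8))

# Operators as in the docstring: plain strings, or ("percentile", q) tuples.
operators = ["average", "std", ("percentile", 90.0)]
templates = {}
for operator in operators:
    if operator == "average":
        templates["average"] = waveforms.mean(axis=0)
    elif operator == "std":
        templates["std"] = waveforms.std(axis=0)
    elif isinstance(operator, tuple) and operator[0] == "percentile":
        templates[f"percentile_{operator[1]}"] = np.percentile(waveforms, operator[1], axis=0)
```

Each template has shape (num_samples, num_channels); stacking over units gives the documented (num_units, num_samples, num_channels) array.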

Class: ComputeWaveforms
  Docstring:
    AnalyzerExtension that extracts waveforms for each unit.
    
    The sparsity is controlled by the SortingAnalyzer sparsity.
    
    Parameters
    ----------
    ms_before : float, default: 1.0
        The number of ms to extract before the spike events
    ms_after : float, default: 2.0
        The number of ms to extract after the spike events
    dtype : None | dtype, default: None
        The dtype of the waveforms. If None, the dtype of the recording is used.
    
    Returns
    -------
    waveforms : np.ndarray
        Array with computed waveforms with shape (num_random_spikes, num_samples, num_channels)
  __init__(self, sorting_analyzer)
  Method: get_waveforms_one_unit(self, unit_id, force_dense: bool = False)
    Docstring:
      Returns the waveforms of a unit id.
      
      Parameters
      ----------
      unit_id : int or str
          The unit id to return waveforms for
      force_dense : bool, default: False
          If True, dense waveforms (all channels) are returned even if the SortingAnalyzer is sparse.
      
      Returns
      -------
      waveforms: np.array
          The waveforms (num_waveforms, num_samples, num_channels).
          In case sparsity is used, only the waveforms on sparse channels are returned.
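Waveform extraction amounts to cutting a snippet of `ms_before` + `ms_after` (converted to samples) around each selected spike. A self-contained sketch of that cut, assuming all spikes are far enough from the segment borders; the function name is illustrative:

```python
import numpy as np

def extract_waveforms(traces, spike_samples, ms_before, ms_after, sampling_frequency):
    # Cut an (nbefore + nafter)-sample snippet around each spike sample.
    nbefore = int(ms_before * sampling_frequency / 1000.0)
    nafter = int(ms_after * sampling_frequency / 1000.0)
    num_samples = nbefore + nafter
    waveforms = np.zeros((spike_samples.size, num_samples, traces.shape[1]), dtype=traces.dtype)
    for i, s in enumerate(spike_samples):
        # Assumes nbefore <= s <= len(traces) - nafter (no border handling here).
        waveforms[i] = traces[s - nbefore : s + nafter]
    return waveforms

fs = 30_000.0
traces = np.random.default_rng(0).normal(size=(3000, 4)).astype("float32")
wfs = extract_waveforms(traces, np.array([100, 500, 1500]),
                        ms_before=1.0, ms_after=2.0, sampling_frequency=fs)
```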

Class: ConcatenateSegmentRecording
  Docstring:
    Return a recording that "concatenates" all segments from all parent recordings
    into one recording with a single segment. The operation is lazy.
    
    For instance, given one recording with 2 segments and one recording with
    3 segments, this class will give one recording with one large segment
    made by concatenating the 5 segments.
    
    Time information is lost upon concatenation. By default `ignore_times` is True.
    If it is False, you get an error unless:
    
      * all segments DO NOT have times, AND
      * all segments have t_start=None
    
    Parameters
    ----------
    recording_list : list of BaseRecording
        A list of recordings
    ignore_times: bool, default: True
        If True, time information (t_start, time_vector) is ignored when concatenating recordings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across recordings
  __init__(self, recording_list, ignore_times=True, sampling_frequency_max_diff=0)
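Concatenation boils down to cumulative sample offsets: a frame in the single concatenated segment maps back to a (segment_index, local_frame) pair in the parents. A small sketch of that arithmetic; the helper name is illustrative:

```python
import numpy as np

segment_lengths = [10_000, 5_000, 20_000]          # samples per original segment
offsets = np.concatenate([[0], np.cumsum(segment_lengths)])

def concatenated_to_segment(frame):
    # Map a frame in the concatenated recording back to (segment_index, local_frame).
    segment_index = np.searchsorted(offsets, frame, side="right") - 1
    return segment_index, frame - offsets[segment_index]
```

For example, frame 12_000 falls 2_000 samples into the second segment.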

Class: ConcatenateSegmentSorting
  Docstring:
    Return a sorting that "concatenates" all segments from all parent sortings
    into one sorting with a single segment. The operation is lazy.

    For instance, given one sorting with 2 segments and one sorting with
    3 segments, this class will give one sorting with one large segment
    made by concatenating the 5 segments. The returned spike times (originating
    from each segment) are relative to the start of the concatenated segment.
    
    Time information is lost upon concatenation. By default `ignore_times` is True.
    If it is False, you get an error unless:
    
      * all segments DO NOT have times, AND
      * all segments have t_start=None
    
    Parameters
    ----------
    sorting_list : list of BaseSorting
        A list of sortings. If `total_samples_list` is not provided, all
        sortings should have an assigned recording.  Otherwise, all sortings
        should be monosegments.
    total_samples_list : list[int] or None, default: None
        If the sortings have no assigned recording, the total number of samples
        of each of the concatenated (monosegment) sortings is pulled from this
        list.
    ignore_times : bool, default: True
        If True, time information (t_start, time_vector) is ignored
        when concatenating the sortings' assigned recordings.
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across sortings
  __init__(self, sorting_list, total_samples_list=None, ignore_times=True, sampling_frequency_max_diff=0)
  Method: get_num_samples(self, segment_index=None)
    Docstring:
      Overrides the BaseSorting method, which requires a recording.

Class: FrameSliceRecording
  Docstring:
    Class to get a lazy frame slice.
    Works only with mono-segment recordings.
    
    Do not use this class directly but use `recording.frame_slice(...)`
    
    Parameters
    ----------
    parent_recording: BaseRecording
    start_frame: None or int, default: None
        Earliest included frame in the parent recording.
        Times are re-referenced to start_frame in the
        sliced object. Set to 0 by default.
    end_frame: None or int, default: None
        Latest frame in the parent recording. As for usual
        python slicing, the end frame is excluded.
        Set to the recording's total number of samples by
        default
  __init__(self, parent_recording, start_frame=None, end_frame=None)

Class: FrameSliceSorting
  Docstring:
    Class to get a lazy frame slice.
    Works only with mono-segment sortings.
    
    Do not use this class directly but use `sorting.frame_slice(...)`
    
    When a recording is registered for the parent sorting,
    a corresponding sliced recording is registered to the sliced sorting.
    
    Note that the returned sliced sorting may be empty.
    
    Parameters
    ----------
    parent_sorting: BaseSorting
    start_frame: None or int, default: None
        Earliest included frame in the parent sorting(/recording).
        Spike times(/traces) are re-referenced to start_frame in the
        sliced objects. Set to 0 if None.
    end_frame: None or int, default: None
        Latest frame in the parent sorting(/recording). As for usual
        python slicing, the end frame is excluded (such that the max
        spike frame in the sliced sorting is `end_frame - start_frame - 1`)
        If None, the end_frame is either:
            - The total number of samples, if a recording is assigned
            - The maximum spike frame + 1, if no recording is assigned
  __init__(self, parent_sorting, start_frame=None, end_frame=None, check_spike_frames=True)
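The slicing rule described above (end excluded, spike frames re-referenced to `start_frame`) can be sketched on a plain array of spike frames; the function name is illustrative:

```python
import numpy as np

def frame_slice_spikes(spike_frames, start_frame, end_frame):
    # Keep spikes in [start_frame, end_frame) and re-reference them to
    # start_frame, mirroring python slicing semantics (end excluded).
    mask = (spike_frames >= start_frame) & (spike_frames < end_frame)
    return spike_frames[mask] - start_frame

spikes = np.array([10, 50, 120, 300, 999])
sliced = frame_slice_spikes(spikes, start_frame=100, end_frame=1000)
```

The maximum possible spike frame in the sliced sorting is `end_frame - start_frame - 1`, as stated in the docstring.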

Class: InjectTemplatesRecording
  Docstring:
    Class for creating a recording based on spike timings and templates.
    Can be just the templates or can add to an already existing recording.
    
    Parameters
    ----------
    sorting : BaseSorting
        Sorting object containing all the units and their spike train.
    templates : np.ndarray[n_units, n_samples, n_channels] | np.ndarray[n_units, n_samples, n_oversampling]
        Array containing the templates to inject for all the units.
        Shape can be:
    
            * (num_units, num_samples, num_channels): standard case
            * (num_units, num_samples, num_channels, upsample_factor): case with oversample template to introduce sampling jitter.
    nbefore : list[int] | int | None, default: None
        The number of samples before the peak of the template to align the spike.
        If None, will default to the highest peak.
    amplitude_factor : list[float] | float | None, default: None
        The amplitude of each spike for each unit.
        Can be None (no scaling).
        Can be a scalar, in which case all spikes have the same factor.
        Can be a vector with the same shape as the sorting's spike vector.
    parent_recording : BaseRecording | None, default: None
        The recording over which to add the templates.
        If None, will default to traces containing all 0.
    num_samples : list[int] | int | None, default: None
        The number of samples in the recording per segment.
        You can use int for mono-segment objects.
    upsample_vector : np.array | None, default: None
        When templates is 4d, a sampling jitter can be simulated.
        Optionally, upsample_vector gives the jitter index, with one value per spike in the range 0 to templates.shape[3] - 1.
    check_borders : bool, default: False
        Checks if the border of the templates are zero.
    
    Returns
    -------
    injected_recording: InjectTemplatesRecording
        The recording with the templates injected.
  __init__(self, sorting: 'BaseSorting', templates: 'np.ndarray', nbefore: 'list[int] | int | None' = None, amplitude_factor: 'list[float] | float | None' = None, parent_recording: 'BaseRecording | None' = None, num_samples: 'list[int] | int | None' = None, upsample_vector: 'np.array | None' = None, check_borders: 'bool' = False) -> 'None'
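The core of the injection is adding the template into the traces around each spike frame, with `nbefore` controlling where the template peak lands. A minimal dense, single-unit sketch (no amplitude scaling, no parent recording, no border handling); all names are illustrative:

```python
import numpy as np

num_samples, num_channels = 1000, 4
templates = np.zeros((1, 60, num_channels), dtype="float32")
templates[0, 20, :] = -50.0                     # unit 0: trough at sample 20
nbefore = 20                                    # align the trough to the spike frame

traces = np.zeros((num_samples, num_channels), dtype="float32")  # parent_recording=None -> all zeros
spike_frames = np.array([100, 400, 800])
for frame in spike_frames:
    start = frame - nbefore
    traces[start : start + templates.shape[1]] += templates[0]
```

With this alignment, the template trough sits exactly on each spike frame.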

Class: Motion
  Docstring:
    Motion of the tissue relative to the probe.
    
    Parameters
    ----------
    displacement : numpy array 2d or list of
        Motion estimate in um.
        If a list, the length is the number of segments.
        For each segment:
    
            * shape (temporal bins, spatial bins)
            * motion.shape[0] = temporal_bins.shape[0]
            * motion.shape[1] = 1 (rigid) or spatial_bins.shape[1] (non-rigid)
    temporal_bins_s : numpy.array 1d or list of
        temporal bins (bin center)
    spatial_bins_um : numpy.array 1d
        Windows center.
        spatial_bins_um.shape[0] == displacement.shape[1]
        If rigid then spatial_bins_um.shape[0] == 1
    direction : str, default: 'y'
        Direction of the motion.
    interpolation_method : str, default: "linear"
        How to determine the displacement between bin centers. See the docs
        for scipy.interpolate.RegularGridInterpolator for options.
  __init__(self, displacement, temporal_bins_s, spatial_bins_um, direction='y', interpolation_method='linear')
  Method: check_properties(self)
    Docstring:
      None
  Method: copy(self)
    Docstring:
      None
  Method: from_dict(d)
    Docstring:
      None
  Method: get_boundaries(self)
    Docstring:
      None
  Method: get_displacement_at_time_and_depth(self, times_s, locations_um, segment_index=None, grid=False)
    Docstring:
      Evaluate the motion estimate at times and positions
      
      Evaluate the motion estimate, returning the (linearly interpolated) estimated displacement
      at the given times and locations.
      
      Parameters
      ----------
      times_s: np.array
      locations_um: np.array
          Either this is a one-dimensional array (a vector of positions along self.dimension), or
          else a 2d array with the 2 or 3 spatial dimensions indexed along axis=1.
      segment_index: int, default: None
          The index of the segment to evaluate. If None, and there is only one segment, then that segment is used.
      grid : bool, default: False
          If grid=False, the default, then times_s and locations_um should have the same one-dimensional
          shape, and the returned displacement[i] is the displacement at time times_s[i] and location
          locations_um[i].
          If grid=True, times_s and locations_um determine a grid of positions to evaluate the displacement.
          Then the returned displacement[i,j] is the displacement at depth locations_um[i] and time times_s[j].
      
      Returns
      -------
      displacement : np.array
          A displacement per input location, of shape times_s.shape if grid=False and (locations_um.size, times_s.size)
          if grid=True.
  Method: make_interpolators(self)
    Docstring:
      None
  Method: save(self, folder)
    Docstring:
      None
  Method: to_dict(self)
    Docstring:
      None
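In the rigid case (a single spatial bin), evaluating the motion estimate at arbitrary times reduces to linear interpolation over the temporal bin centers. A sketch of that special case using `np.interp` (the full class uses scipy's RegularGridInterpolator for the non-rigid case); names are illustrative:

```python
import numpy as np

# Rigid motion: one displacement value per temporal bin (single spatial bin).
temporal_bins_s = np.array([0.0, 1.0, 2.0, 3.0])
displacement = np.array([0.0, 5.0, 10.0, 15.0])   # um, one value per bin center

def displacement_at_time(times_s):
    # Linear interpolation between temporal bin centers, as in
    # get_displacement_at_time_and_depth for a rigid motion estimate.
    return np.interp(times_s, temporal_bins_s, displacement)
```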

Class: NoiseGeneratorRecording
  Docstring:
    A lazy recording that generates white noise samples if and only if `get_traces` is called.
    
    This is done by tiling a small noise chunk.

    Two strategies make it reproducible across different start/end frame calls:
      * "tile_pregenerated": pregenerate a small noise block and tile it depending on start_frame/end_frame
      * "on_the_fly": generate small noise chunks on the fly and tile them; the seed also depends on the noise block index.
    
    
    Parameters
    ----------
    num_channels : int
        The number of channels.
    sampling_frequency : float
        The sampling frequency of the recorder.
    durations : list[float]
        The durations of each segment in seconds. Note that the length of this list is the number of segments.
    noise_levels : float | np.array, default: 1.0
        Std of the white noise (if an array, one value per channel)
    cov_matrix : np.array | None, default: None
        The covariance matrix of the noise
    dtype : np.dtype | str | None, default: "float32"
        The dtype of the recording. Note that only np.float32 and np.float64 are supported.
    seed : int | None, default: None
        The seed for np.random.default_rng.
    strategy : "tile_pregenerated" | "on_the_fly", default: "tile_pregenerated"
        The strategy for generating noise chunks:
          * "tile_pregenerated": pregenerate a noise chunk of noise_block_size samples and repeat it;
                                 very fast and consumes only one noise block of memory.
          * "on_the_fly": generate a new noise block on the fly by combining the seed and the noise block index;
                          no memory preallocation but a bit more computation (random generation).
    noise_block_size : int, default: 30000
        Size in samples of the noise block.
    
    Notes
    -----
    If modifying this function, ensure that only one call to malloc is made per call to get_traces to
    maintain the optimized memory profile.
  __init__(self, num_channels: 'int', sampling_frequency: 'float', durations: 'list[float]', noise_levels: 'float | np.array' = 1.0, cov_matrix: 'np.array | None' = None, dtype: 'np.dtype | str | None' = 'float32', seed: 'int | None' = None, strategy: "Literal['tile_pregenerated', 'on_the_fly']" = 'tile_pregenerated', noise_block_size: 'int' = 30000)
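The "tile_pregenerated" strategy is reproducible because sample i of the virtual recording is always `noise_block[i % noise_block_size]`, regardless of which start/end frames are requested. A minimal sketch of that invariant; names are illustrative:

```python
import numpy as np

noise_block_size = 1000
num_channels = 2
rng = np.random.default_rng(seed=42)
# Pregenerate one small noise block and never allocate more.
noise_block = rng.normal(0.0, 1.0, size=(noise_block_size, num_channels)).astype("float32")

def get_traces(start_frame, end_frame):
    # Tile the pregenerated block: sample i is noise_block[i % noise_block_size],
    # so overlapping requests always return identical samples.
    frames = np.arange(start_frame, end_frame) % noise_block_size
    return noise_block[frames]

a = get_traces(500, 1500)
b = get_traces(0, 2500)
```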

Class: NpyFolderSnippets
  Docstring:
    NpyFolderSnippets is an internal format used in spikeinterface.
    It is a NpySnippetsExtractor + metadata contained in a folder.
    
    It is created with the function: `snippets.save(format="npy", folder="/myfolder")`
    
    Parameters
    ----------
    folder_path : str or Path
        The path to the folder
    
    Returns
    -------
    snippets : NpyFolderSnippets
        The snippets
  __init__(self, folder_path)

Class: NpySnippetsExtractor
  Docstring:
    Dead simple and super light format based on the NPY numpy format.
    
    It is in fact an archive of several files in the .npy format.
    All spikes are stored in a two-column manner: index + labels.
  __init__(self, file_paths, sampling_frequency, channel_ids=None, nbefore=None, gain_to_uV=None, offset_to_uV=None)
  Method: write_snippets(snippets, file_paths, dtype=None)
    Docstring:
      Save snippet extractor in binary .npy format.
      
      Parameters
      ----------
      snippets: SnippetsExtractor
          The snippets extractor object to be saved in .npy format
      file_paths: str
          The paths to the files.
      dtype: None, str or dtype
          Typecode or data-type to which the snippets will be cast.

Class: NpzFolderSorting
  Docstring:
    NpzFolderSorting is the old internal format used in spikeinterface (<=0.98.0)
    
    This is a folder that contains:
    
      * "sorting_cached.npz" file in the NpzSortingExtractor format
      * "npz.json" which the json description of NpzSortingExtractor
      * a metadata folder for units properties.
    
    It is created with the function: `sorting.save(folder="/myfolder", format="npz_folder")`
    
    Parameters
    ----------
    folder_path : str or Path
    
    Returns
    -------
    sorting : NpzFolderSorting
        The sorting
  __init__(self, folder_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Class: NpzSortingExtractor
  Docstring:
    Dead simple and super light format based on the NPZ numpy format.
    https://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html#numpy.savez
    
    It is in fact an archive of several files in the .npy format.
    All spikes are stored in a two-column manner: index + labels.
  __init__(self, file_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Class: NumpyEvent
  Docstring:
    Abstract class representing events.
    
    
    Parameters
    ----------
    channel_ids : list or np.array
        The channel ids
    structured_dtype : dtype or dict
        The dtype of the events. If dict, each key is the channel_id and values must be
        the dtype of the channel (also structured). If dtype, each channel is assigned the
        same dtype.
        In case of structured dtypes, the "time" or "timestamp" field name must be present.
  __init__(self, channel_ids, structured_dtype)
  Method: from_dict(event_dict_list)
    Docstring:
      Constructs NumpyEvent from a dictionary
      
      Parameters
      ----------
      event_dict_list : list
          List of dictionaries with channel_ids as keys and event data as values.
          Each list element corresponds to an event segment.
          If values have a simple dtype, they are considered the timestamps.
          If values have a structured dtype, they have to contain a "times" or "timestamps"
          field.
      
      Returns
      -------
      NumpyEvent
          The Event object

Class: NumpyFolderSorting
  Docstring:
    NumpyFolderSorting is the new internal format used in spikeinterface (>=0.99.0) for caching sorting objects.
    
    It is a simple folder that contains:
      * a file "spike.npy" (numpy format) with all flatten spikes (using sorting.to_spike_vector())
      * a "numpysorting_info.json" containing sampling_frequency, unit_ids and num_segments
      * a metadata folder for units properties.
    
    It is created with the function: `sorting.save(folder="/myfolder", format="numpy_folder")`
  __init__(self, folder_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Class: NumpyRecording
  Docstring:
    In memory recording.
    Contrary to previous versions, this class does not handle npy files.
    
    Parameters
    ----------
    traces_list :  list of array or array (if mono segment)
        The traces to instantiate a mono or multisegment Recording
    sampling_frequency : float
        The sampling frequency in Hz
    t_starts : None or list of float
        Times in seconds of the first sample for each segment
    channel_ids : list
        An optional list of channel_ids. If None, linear channels are assumed
  __init__(self, traces_list, sampling_frequency, t_starts=None, channel_ids=None)
  Method: from_recording(source_recording, **job_kwargs)
    Docstring:
      None

Class: NumpySnippets
  Docstring:
    In-memory snippets object.
    Contrary to previous versions, this class does not handle npy files.
    
    Parameters
    ----------
    snippets_list :  list of array or array (if mono segment)
        The snippets to instantiate a mono or multisegment basesnippet
    spikesframes_list : list of array or array (if mono segment)
        Frame of each snippet
    sampling_frequency : float
        The sampling frequency in Hz
    
    channel_ids : list
        An optional list of channel_ids. If None, linear channels are assumed
  __init__(self, snippets_list, spikesframes_list, sampling_frequency, nbefore=None, channel_ids=None)

Class: NumpySorting
  Docstring:
    In memory sorting object.
    The internal representation is always done with a long "spike vector".
    
    
    But we have convenient class methods to instantiate from:
      * other sorting object: `NumpySorting.from_sorting()`
      * from samples+labels: `NumpySorting.from_samples_and_labels()`
      * from times+labels: `NumpySorting.from_times_and_labels()`
      * from dict of list: `NumpySorting.from_unit_dict()`
      * from neo: `NumpySorting.from_neo_spiketrain_list()`
    
    Parameters
    ----------
    spikes :  numpy.array
        A numpy vector, the one given by Sorting.to_spike_vector().
    sampling_frequency : float
        The sampling frequency in Hz
    unit_ids : list
        A list of unit_ids.
  __init__(self, spikes, sampling_frequency, unit_ids)
  Method: from_neo_spiketrain_list(neo_spiketrains, sampling_frequency, unit_ids=None) -> "'NumpySorting'"
    Docstring:
      Construct a NumpySorting with a neo spiketrain list.
      
      If this is a list of lists, it is multi-segment.
      
      Parameters
      ----------
  Method: from_peaks(peaks, sampling_frequency, unit_ids) -> "'NumpySorting'"
    Docstring:
      Construct a sorting from peaks returned by 'detect_peaks()' function.
      The unit ids correspond to the recording channel ids and spike trains are the
      detected spikes for each channel.
      
      Parameters
      ----------
      peaks : structured np.array
          Peaks array as returned by the 'detect_peaks()' function
      sampling_frequency : float
          the sampling frequency in Hz
      unit_ids : np.array
          The unit_ids vector which is generally the channel_ids but can be different.
      
      Returns
      -------
      sorting
          The NumpySorting object
  Method: from_samples_and_labels(samples_list, labels_list, sampling_frequency, unit_ids=None) -> "'NumpySorting'"
    Docstring:
      Construct NumpySorting extractor from:
        * an array of spike samples
        * an array of spike labels
      In case of multisegment, these are lists of arrays.
      
      Parameters
      ----------
      samples_list : list of array (or array)
          An array of spike samples
      labels_list : list of array (or array)
          An array of spike labels corresponding to the given times
      unit_ids : list or None, default: None
          The explicit list of unit_ids that should be extracted from labels_list
          If None, then it will be np.unique(labels_list)
  Method: from_sorting(source_sorting: 'BaseSorting', with_metadata=False, copy_spike_vector=False) -> "'NumpySorting'"
    Docstring:
      Create a numpy sorting from another sorting extractor
  Method: from_times_and_labels(times_list, labels_list, sampling_frequency, unit_ids=None) -> "'NumpySorting'"
    Docstring:
      Construct NumpySorting extractor from:
        * an array of spike times (in s)
        * an array of spike labels
      In case of multisegment, these are lists of arrays.
      
      Parameters
      ----------
      times_list : list of array (or array)
          An array of spike times in seconds
      labels_list : list of array (or array)
          An array of spike labels corresponding to the given times
      unit_ids : list or None, default: None
          The explicit list of unit_ids that should be extracted from labels_list
          If None, then it will be np.unique(labels_list)
  Method: from_times_labels(times_list, labels_list, sampling_frequency, unit_ids=None) -> "'NumpySorting'"
    Docstring:
      None
  Method: from_unit_dict(units_dict_list, sampling_frequency) -> "'NumpySorting'"
    Docstring:
      Construct NumpySorting from a list of dict.
      The list length is the segment count.
      Each dict has unit_ids as keys and spike times as values.
      
      Parameters
      ----------
      dict_list : list of dict

Class: SelectSegmentRecording
  Docstring:
    Return a new recording with a subset of segments from a multi-segment recording.
    
    Parameters
    ----------
    recording : BaseRecording
        The multi-segment recording
    segment_indices : int | list[int]
        The segment indices to select
  __init__(self, recording: 'BaseRecording', segment_indices: 'int | list[int]')

Class: SelectSegmentSorting
  Docstring:
    Return a new sorting with a single segment from a multi-segment sorting.
    
    Parameters
    ----------
    sorting : BaseSorting
        The multi-segment sorting
    segment_indices : int | list[int]
        The segment indices to select
  __init__(self, sorting: 'BaseSorting', segment_indices: 'int | list[int]')

Class: SharedMemoryRecording
  Docstring:
    In-memory recording with a shared memory buffer.
    
    Parameters
    ----------
    shm_names : list
        List of sharedmem names for each segment
    shape_list : list
        List of shape of sharedmem buffer for each segment
        The first dimension is the number of samples, the second is the number of channels.
        Note that the number of channels must be the same for all segments
    sampling_frequency : float
        The sampling frequency in Hz
    t_starts : None or list of float
        Times in seconds of the first sample for each segment
    channel_ids : list
        An optional list of channel_ids. If None, linear channels are assumed
    main_shm_owner : bool, default: True
        If True, the main instance will unlink the sharedmem buffer when deleted
  __init__(self, shm_names, shape_list, dtype, sampling_frequency, channel_ids=None, t_starts=None, main_shm_owner=True)
  Method: from_recording(source_recording, **job_kwargs)
    Docstring:
      None

Class: SharedMemorySorting
  Docstring:
    Class representing several segments, several units and their spiketrains.
  __init__(self, shm_name, shape, sampling_frequency, unit_ids, dtype=[('sample_index', 'int64'), ('unit_index', 'int64'), ('segment_index', 'int64')], main_shm_owner=True)
  Method: from_sorting(source_sorting, with_metadata=False)
    Docstring:
      None

Class: SortingAnalyzer
  Docstring:
    Class to make a pair of Recording-Sorting which will be used for all postprocessing,
    visualization and quality metric computation.

    This internally maintains a list of computed AnalyzerExtensions (waveforms, pca, unit positions, spike positions, ...).

    It can live in memory and/or be persisted to disk in 2 internal formats (folder/json/npz or zarr).
    A SortingAnalyzer can be transferred to another format using `save_as()`.

    It handles unit sparsity, which can be propagated to extensions.

    It handles spike sampling, which can be propagated to extensions: computations can run on only a subset of spikes.

    This internally saves a copy of the Sorting and extracts main recording attributes (without traces) so that
    the SortingAnalyzer object can be reloaded even if references to the original sorting and/or to the original recording
    are lost.

    SortingAnalyzer() should never be used directly for creation: use create_sorting_analyzer(sorting, recording, ...)
    or, alternatively, SortingAnalyzer.create(...) instead.
  __init__(self, sorting: 'BaseSorting', recording: 'BaseRecording | None' = None, rec_attributes: 'dict | None' = None, format: 'str | None' = None, sparsity: 'ChannelSparsity | None' = None, return_scaled: 'bool' = True, backend_options: 'dict | None' = None)
  Method: are_units_mergeable(self, merge_unit_groups: 'list[str | int]', merging_mode: 'str' = 'soft', sparsity_overlap: 'float' = 0.75, return_masks: 'bool' = False)
    Docstring:
      Check if soft merges can be performed given sparsity_overlap param.
      
      Parameters
      ----------
      merge_unit_groups : list/tuple of lists/tuples
          A list of lists for every merge group. Each element needs to have at least two elements
          (two units to merge).
      merging_mode : "soft" | "hard", default: "soft"
          How merges are performed. In the "soft" mode, merges will be approximated, with no smart merging
          of the extension data.
      sparsity_overlap : float, default: 0.75
          The percentage of overlap that units should share in order to accept merges.
      return_masks : bool, default: False
          If True, return the masks used for the merge.
      
      Returns
      -------
      mergeable : dict[bool]
          Dictionary of mergeable units. The keys are the merge unit groups (as tuples), and boolean
          values indicate if the merge is possible.
      masks : dict[np.array]
          Dictionary of masks used for the merge. The keys are the merge unit groups, and the values
          are the masks used for the merge.
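      As an illustration of the `sparsity_overlap` criterion, here is a minimal numpy sketch of comparing an overlap fraction between two units' sparsity masks against the threshold. The masks and the intersection-over-union rule are assumptions for illustration, not spikeinterface's internal code.

```python
import numpy as np

# Illustrative boolean sparsity masks (one per unit, one entry per channel);
# these are made-up values, not taken from spikeinterface internals.
mask_a = np.array([True, True, True, False, False])
mask_b = np.array([False, True, True, True, False])

# One plausible overlap measure: shared channels over the union of channels.
overlap = (mask_a & mask_b).sum() / (mask_a | mask_b).sum()  # 2 / 4 = 0.5

sparsity_overlap = 0.75
mergeable = bool(overlap >= sparsity_overlap)  # False: not enough shared channels
```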
  Method: channel_ids_to_indices(self, channel_ids) -> 'np.ndarray'
    Docstring:
      None
  Method: compute(self, input, save=True, extension_params=None, verbose=False, **kwargs) -> "'AnalyzerExtension | None'"
    Docstring:
      Compute one extension or several extensions.
      Internally calls compute_one_extension() or compute_several_extensions() depending on the input type.
      
      Parameters
      ----------
      input : str or dict or list
          The extensions to compute, which can be passed as:
          * a string: compute one extension. Additional parameters can be passed as key word arguments.
          * a dict: compute several extensions. The keys are the extension names and the values are dictionaries with the extension parameters.
          * a list: compute several extensions. The list contains the extension names. Additional parameters can be passed with the extension_params
          argument.
      save : bool, default: True
          If True the extension is saved to disk (only if sorting analyzer format is not "memory")
      extension_params : dict or None, default: None
          If input is a list, this parameter can be used to specify parameters for each extension.
          The extension_params keys must be included in the input list.
      **kwargs:
          All other kwargs are transmitted to extension.set_params() (if input is a string) or job_kwargs
      
      Returns
      -------
      extension : AnalyzerExtension | None
          The extension instance if input is a string, None otherwise.
      
      Examples
      --------
      This function accepts the following possible signatures for flexibility:
      
      Compute one extension, with parameters:
      >>> analyzer.compute("waveforms", ms_before=1.5, ms_after=2.5)
      
      Compute two extensions with a list as input and with default parameters:
      >>> analyzer.compute(["random_spikes", "waveforms"])
      
      Compute two extensions with dict as input, one dict per extension
      >>> analyzer.compute({"random_spikes": {}, "waveforms": {"ms_before": 1.5, "ms_after": 2.5}})
      
      Compute two extensions with an input list specifying custom parameters for one
      (the other will use default parameters):
      >>> analyzer.compute(["random_spikes", "waveforms"], extension_params={"waveforms": {"ms_before": 1.5, "ms_after": 2.5}})
  Method: compute_one_extension(self, extension_name, save=True, verbose=False, **kwargs) -> "'AnalyzerExtension'"
    Docstring:
      Compute one extension.
      
      Important note: when recomputing an extension, all extensions that depend on it
      will be automatically and silently deleted to keep the data coherent.
      
      Parameters
      ----------
      extension_name : str
          The name of the extension.
          For instance "waveforms", "templates", ...
      save : bool, default: True
          If the extension can be saved, then it is saved.
          If not, the extension will only live in memory until the object is deleted.
          save=False is convenient to try some parameters without changing an already saved extension.
      
      **kwargs:
          All other kwargs are transmitted to extension.set_params() or job_kwargs
      
      Returns
      -------
      result_extension : AnalyzerExtension
          Return the extension instance
      
      Examples
      --------
      
      >>> # Note that the return value is the extension instance.
      >>> extension = sorting_analyzer.compute("waveforms", **some_params)
      >>> extension = sorting_analyzer.compute_one_extension("waveforms", **some_params)
      >>> wfs = extension.data["waveforms"]
      >>> # Note this can also be done in the old style, BUT the return is not the same: it returns the data directly
      >>> wfs = compute_waveforms(sorting_analyzer, **some_params)
  Method: compute_several_extensions(self, extensions, save=True, verbose=False, **job_kwargs)
    Docstring:
      Compute several extensions
      
      Important note: when recomputing an extension, all extensions that depend on it
      will be automatically and silently deleted to keep the data coherent.
      
      
      Parameters
      ----------
      extensions : dict
          Keys are extension_names and values are params.
      save : bool, default: True
          If the extension can be saved, then it is saved.
          If not, the extension will only live in memory until the object is deleted.
          save=False is convenient to try some parameters without changing an already saved extension.
      
      Returns
      -------
      No return
      
      Examples
      --------
      
      >>> sorting_analyzer.compute({"waveforms": {"ms_before": 1.2}, "templates" : {"operators": ["average", "std", ]} })
      >>> sorting_analyzer.compute_several_extensions({"waveforms": {"ms_before": 1.2}, "templates" : {"operators": ["average", "std"]}})
  Method: copy(self)
    Docstring:
      Create a copy of the SortingAnalyzer with format "memory".
  Method: delete_extension(self, extension_name) -> 'None'
    Docstring:
      Delete the extension from the dict and also from the persistent zarr or folder.
  Method: get_channel_locations(self) -> 'np.ndarray'
    Docstring:
      None
  Method: get_computable_extensions(self)
    Docstring:
      Get all extensions that can be computed by the analyzer.
  Method: get_default_extension_params(self, extension_name: 'str') -> 'dict'
    Docstring:
      Get the default params for an extension.
      
      Parameters
      ----------
      extension_name : str
          The extension name
      
      Returns
      -------
      default_params : dict
          The default parameters for the extension
  Method: get_dtype(self)
    Docstring:
      None
  Method: get_extension(self, extension_name: 'str')
    Docstring:
      Get an AnalyzerExtension.
      If not already loaded, it is loaded automatically.
      
      Returns None if the extension is not computed yet (this avoids having to call has_extension() before getting it)
  Method: get_loaded_extension_names(self)
    Docstring:
      Return the loaded or already computed extensions names.
  Method: get_num_channels(self) -> 'int'
    Docstring:
      None
  Method: get_num_samples(self, segment_index: 'Optional[int]' = None) -> 'int'
    Docstring:
      None
  Method: get_num_segments(self) -> 'int'
    Docstring:
      None
  Method: get_num_units(self) -> 'int'
    Docstring:
      None
  Method: get_probe(self)
    Docstring:
      None
  Method: get_probegroup(self)
    Docstring:
      None
  Method: get_recording_property(self, key) -> 'np.ndarray'
    Docstring:
      None
  Method: get_saved_extension_names(self)
    Docstring:
      Get extension names saved in folder or zarr that can be loaded.
      This does not load data; it only explores the directory.
  Method: get_sorting_property(self, key) -> 'np.ndarray'
    Docstring:
      None
  Method: get_sorting_provenance(self)
    Docstring:
      Get the original sorting if possible, otherwise return None
  Method: get_total_duration(self) -> 'float'
    Docstring:
      None
  Method: get_total_samples(self) -> 'int'
    Docstring:
      None
  Method: has_extension(self, extension_name: 'str') -> 'bool'
    Docstring:
      Check if the extension exists in memory (dict) or in the folder or in zarr.
  Method: has_recording(self) -> 'bool'
    Docstring:
      None
  Method: has_temporary_recording(self) -> 'bool'
    Docstring:
      None
  Method: is_filtered(self) -> 'bool'
    Docstring:
      None
  Method: is_read_only(self) -> 'bool'
    Docstring:
      None
  Method: is_sparse(self) -> 'bool'
    Docstring:
      None
  Method: load_all_saved_extension(self)
    Docstring:
      Load all saved extensions in memory.
  Method: load_extension(self, extension_name: 'str')
    Docstring:
      Load an extension from a folder or zarr into the `SortingAnalyzer.extensions` dict.
      
      Parameters
      ----------
      extension_name : str
          The extension name.
      
      Returns
      -------
      ext_instance:
          The loaded instance of the extension
  Method: merge_units(self, merge_unit_groups, new_unit_ids=None, censor_ms=None, merging_mode='soft', sparsity_overlap=0.75, new_id_strategy='append', return_new_unit_ids=False, format='memory', folder=None, verbose=False, **job_kwargs) -> "'SortingAnalyzer'"
    Docstring:
      This method is equivalent to `save_as()` but with a list of merges to be applied.
      Merges units by creating a new SortingAnalyzer object with the appropriate merges.
      
      Extensions are also updated to display the merged `unit_ids`.
      
      Parameters
      ----------
      merge_unit_groups : list/tuple of lists/tuples
          A list of lists for every merge group. Each element needs to have at least two elements (two units to merge),
          but it can also have more (merge multiple units at once).
      new_unit_ids : None | list, default: None
          New unit_ids for the merged units. If given, it needs to have the same length as `merge_unit_groups`. If None,
          merged units will take the first unit_id of every list of merges
      censor_ms : None | float, default: None
          When merging units, any spikes violating this refractory period will be discarded. If None, no spikes are discarded
      merging_mode : "soft" | "hard", default: "soft"
          How merges are performed. If `merging_mode` is "soft", merges will be approximated, with no reloading of the
          waveforms, which leads to approximations. If `merging_mode` is "hard", recomputations are accurately performed,
          reloading waveforms if needed
      sparsity_overlap : float, default: 0.75
          The percentage of overlap that units should share in order to accept merges. If this criterion is not
          met, soft merging will not be possible and an error will be raised
      new_id_strategy : "append" | "take_first", default: "append"
          The strategy that should be used, if `new_unit_ids` is None, to create new unit_ids.
      
              * "append" : new unit_ids will be appended after max(sorting.unit_ids)
              * "take_first" : new_unit_ids will be the first unit_id of every list of merges
      return_new_unit_ids : bool, default: False
          Also return new_unit_ids, which are the ids of the new units.
      folder : Path | None, default: None
          The new folder where the analyzer with merged units is copied for `format` "binary_folder" or "zarr"
      format : "memory" | "binary_folder" | "zarr", default: "memory"
          The format of SortingAnalyzer
      verbose : bool, default: False
          Whether to display calculations (such as sparsity estimation)
      
      Returns
      -------
      analyzer : SortingAnalyzer
          The newly created `SortingAnalyzer` with the merged units
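      The `new_id_strategy` options above can be sketched with plain numpy (the unit ids are hypothetical, and this is an illustration of the naming rules, not the library's internal code):

```python
import numpy as np

unit_ids = np.array([0, 1, 2, 3, 4])     # hypothetical existing units
merge_unit_groups = [[0, 1], [3, 4]]     # two merge groups

# "append": new ids continue after the current maximum unit id
append_ids = [int(unit_ids.max()) + 1 + i for i in range(len(merge_unit_groups))]

# "take_first": each merged unit keeps the first id of its group
take_first_ids = [group[0] for group in merge_unit_groups]
```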
  Method: remove_units(self, remove_unit_ids, format='memory', folder=None) -> "'SortingAnalyzer'"
    Docstring:
      This method is equivalent to `save_as()` but with removal of a subset of units.
      Filters units by creating a new sorting analyzer object in a new folder.
      
      Extensions are also updated to remove the unit ids.
      
      Parameters
      ----------
      remove_unit_ids : list or array
          The unit ids to remove in the new SortingAnalyzer object.
      format : "memory" | "binary_folder" | "zarr" , default: "memory"
          The format of the returned SortingAnalyzer.
      folder : Path or None, default: None
          The new folder where the analyzer without removed units is copied if `format`
          is "binary_folder" or "zarr"
      
      Returns
      -------
      analyzer : SortingAnalyzer
          The newly created sorting_analyzer without the removed units
  Method: save_as(self, format='memory', folder=None, backend_options=None) -> "'SortingAnalyzer'"
    Docstring:
      Save SortingAnalyzer object into another format.
      Useful for converting from memory to zarr or from memory to binary.
      
      Note that the recording provenance or sorting provenance can be lost.
      
      Mainly propagates the copied sorting and recording properties.
      
      Parameters
      ----------
      folder : str | Path | None, default: None
          The output folder if `format` is "zarr" or "binary_folder"
      format : "memory" | "binary_folder" | "zarr", default: "memory"
          The new backend format to use
      backend_options : dict | None, default: None
          Keyword arguments for the backend specified by format. It can contain the:
      
              * storage_options: dict | None (fsspec storage options)
              * saving_options: dict | None (additional saving options for creating and saving datasets, e.g. compression/filters for zarr)
  Method: select_units(self, unit_ids, format='memory', folder=None) -> "'SortingAnalyzer'"
    Docstring:
      This method is equivalent to `save_as()` but with a subset of units.
      Filters units by creating a new sorting analyzer object in a new folder.
      
      Extensions are also updated to filter the selected unit ids.
      
      Parameters
      ----------
      unit_ids : list or array
          The unit ids to keep in the new SortingAnalyzer object
      format : "memory" | "binary_folder" | "zarr" , default: "memory"
          The format of the returned SortingAnalyzer.
      folder : Path | None, default: None
          The new folder where the analyzer with selected units is copied if `format` is
          "binary_folder" or "zarr"
      
      Returns
      -------
      analyzer : SortingAnalyzer
          The newly created sorting_analyzer with the selected units
  Method: set_sorting_property(self, key, values: 'list | np.ndarray | tuple', ids: 'list | np.ndarray | tuple | None' = None, missing_value: 'Any' = None, save: 'bool' = True) -> 'None'
    Docstring:
      Set property vector for unit ids.
      
      If the SortingAnalyzer backend is in memory, the property will be only set in memory.
      If the SortingAnalyzer backend is `binary_folder` or `zarr`, the property will also
      be saved to the backend.
      
      Parameters
      ----------
      key : str
          The property name
      values : np.array
          Array of values for the property
      ids : list/np.array, default: None
          Subset of ids on which to set the values.
          If None, all the ids are set or changed
      missing_value : Any, default: None
          In case the property is set on a subset of values ("ids" not None),
          this argument specifies how to fill missing values.
          The `missing_value` is required for types int and unsigned int.
      save : bool, default: True
          If True, the property is saved to the backend if possible.
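      A rough sketch of how setting a property on a subset of ids with a `missing_value` fill could work; the data is illustrative (and assumes `ids` is given in increasing order), not the library's implementation:

```python
import numpy as np

unit_ids = np.array([0, 1, 2, 3])   # all units
ids = np.array([1, 3])              # subset receiving explicit values
values = np.array([10, 20])
missing_value = -1                  # required for int dtypes, as noted above

# Fill everything with missing_value, then write the explicit values.
prop = np.full(unit_ids.size, missing_value, dtype=values.dtype)
prop[np.isin(unit_ids, ids)] = values
# prop -> [-1, 10, -1, 20]
```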
  Method: set_temporary_recording(self, recording: 'BaseRecording', check_dtype: 'bool' = True)
    Docstring:
      Sets a temporary recording object. This function can be useful to temporarily set
      a "cached" recording object that is not saved in the SortingAnalyzer object to speed up
      computations. Upon reloading, the SortingAnalyzer object will try to reload the recording
      from the original location in a lazy way.
      
      
      Parameters
      ----------
      recording : BaseRecording
          The recording object to set as temporary recording.
      check_dtype : bool, default: True
          If True, check that the dtype of the temporary recording is the same as the original recording.

Class: SpikeVectorSortingSegment
  Docstring:
    A sorting segment that stores spike times as a spike vector.
  __init__(self, spikes, segment_index, unit_ids)
  Method: get_unit_spike_train(self, unit_id, start_frame, end_frame)
    Docstring:
      Get the spike train for a unit.
      
      Parameters
      ----------
      unit_id
      start_frame : int, default: None
      end_frame : int, default: None
      
      Returns
      -------
      np.ndarray
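      The spike-vector lookup above can be sketched as follows: because the vector is sorted by sample index, the frame window is found with binary search and the unit filter is applied afterwards. The toy data is illustrative, not spikeinterface's internal layout code.

```python
import numpy as np

# Toy spike vector, sorted by sample index (one unit index per spike).
sample_indices = np.array([10, 25, 40, 55, 70, 90])
unit_indices = np.array([0, 1, 0, 0, 1, 0])

def get_unit_spike_train(unit_index, start_frame, end_frame):
    # Binary-search the frame window, then keep spikes of the requested unit.
    i0 = np.searchsorted(sample_indices, start_frame, side="left")
    i1 = np.searchsorted(sample_indices, end_frame, side="left")
    mask = unit_indices[i0:i1] == unit_index
    return sample_indices[i0:i1][mask]

train = get_unit_spike_train(0, start_frame=20, end_frame=80)  # -> [40, 55]
```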

Class: SplitSegmentSorting
  Docstring:
    Splits a sorting with a single segment to multiple segments
    based on the given list of recordings (must be in order)
    
    Parameters
    ----------
    parent_sorting : BaseSorting
        Sorting with a single segment (e.g. from sorting concatenated recording)
    recording_or_recording_list : list of recordings, ConcatenateSegmentRecording, or None, default: None
        If list of recordings, uses the lengths of those recordings to split the sorting
        into smaller segments
        If ConcatenateSegmentRecording, uses the associated list of recordings to split
        the sorting into smaller segments
        If None, looks for the recording associated with the sorting
  __init__(self, parent_sorting: 'BaseSorting', recording_or_recording_list=None)
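  The splitting logic can be sketched with numpy: cumulative recording lengths give per-segment frame offsets, and each spike of the concatenated sorting is mapped to a segment and a segment-local frame. The lengths and spike frames below are made up for illustration.

```python
import numpy as np

segment_lengths = [100, 150, 200]              # hypothetical recording lengths
offsets = np.cumsum([0] + segment_lengths)     # [0, 100, 250, 450]

spike_frames = np.array([20, 120, 260, 400])   # spikes in the concatenated sorting

# Map each spike to its segment and convert to a segment-local frame.
segment_of_spike = np.searchsorted(offsets, spike_frames, side="right") - 1
local_frames = spike_frames - offsets[segment_of_spike]
```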

Class: Templates
  Docstring:
    A class to represent spike templates, which can be either dense or sparse.
    
    Parameters
    ----------
    templates_array : np.ndarray
        Array containing the templates data.
    sampling_frequency : float
        Sampling frequency of the templates.
    nbefore : int
        Number of samples before the spike peak.
    sparsity_mask : np.ndarray or None, default: None
        Boolean array indicating the sparsity pattern of the templates.
        If `None`, the templates are considered dense.
    channel_ids : np.ndarray, optional, default: None
        Array of channel IDs. If `None`, defaults to an array of increasing integers.
    unit_ids : np.ndarray, optional, default: None
        Array of unit IDs. If `None`, defaults to an array of increasing integers.
    probe : Probe, default: None
        A `probeinterface.Probe` object
    is_scaled : bool, optional, default: True
        If True, it means that the templates are in uV, otherwise they are in raw ADC values.
    check_for_consistent_sparsity : bool, optional, default: True
        When passing a sparsity_mask, this checks that the templates array is also sparse and that it matches the
        structure of the sparsity_mask. If False, this check is skipped.
    
    The following attributes are available after construction:
    
    Attributes
    ----------
    num_units : int
        Number of units in the templates. Automatically determined from `templates_array`.
    num_samples : int
        Number of samples per template. Automatically determined from `templates_array`.
    num_channels : int
        Number of channels in the templates. Automatically determined from `templates_array` or `sparsity_mask`.
    nafter : int
        Number of samples after the spike peak. Calculated as `num_samples - nbefore - 1`.
    ms_before : float
        Milliseconds before the spike peak. Calculated from `nbefore` and `sampling_frequency`.
    ms_after : float
        Milliseconds after the spike peak. Calculated from `nafter` and `sampling_frequency`.
    sparsity : ChannelSparsity, optional
        Object representing the sparsity pattern of the templates. Calculated from `sparsity_mask`.
        If `None`, the templates are considered dense.
  __init__(self, templates_array: 'np.ndarray', sampling_frequency: 'float', nbefore: 'int', is_scaled: 'bool' = True, sparsity_mask: 'np.ndarray' = None, channel_ids: 'np.ndarray' = None, unit_ids: 'np.ndarray' = None, probe: 'Probe' = None, check_for_consistent_sparsity: 'bool' = True) -> None
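  The derived attributes listed above follow directly from the constructor arguments; a small sketch using the formulas as stated (the array shape and sampling frequency are illustrative):

```python
import numpy as np

# Dense templates: shape (num_units, num_samples, num_channels).
templates_array = np.zeros((4, 90, 32))
sampling_frequency = 30_000.0
nbefore = 30

num_units, num_samples, num_channels = templates_array.shape
nafter = num_samples - nbefore - 1                 # per the formula stated above
ms_before = nbefore / sampling_frequency * 1000.0  # samples -> milliseconds
```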
  Method: add_templates_to_zarr_group(self, zarr_group: "'zarr.Group'") -> 'None'
    Docstring:
      Adds a serialized version of the object to a given Zarr group.
      
      It is the inverse of the `from_zarr_group` method.
      
      Parameters
      ----------
      zarr_group : zarr.Group
          The Zarr group to which the template object will be serialized.
      
      Notes
      -----
      This method will create datasets within the Zarr group for `templates_array`,
      `channel_ids`, and `unit_ids`. It will also add `sampling_frequency` and `nbefore`
      as attributes to the group. If `sparsity_mask` and `probe` are not None, they will
      be included as a dataset and a subgroup, respectively.
      
      The `templates_array` dataset is saved with a chunk size that has a single unit per chunk
      to optimize read/write operations for individual units.
  Method: are_templates_sparse(self) -> 'bool'
    Docstring:
      None
  Method: from_zarr(folder_path: 'str | Path') -> "'Templates'"
    Docstring:
      Deserialize the Templates object from a Zarr file located at the given folder path.
      
      Parameters
      ----------
      folder_path : str | Path
          The path to the folder where the Zarr file is located.
      
      Returns
      -------
      Templates
          An instance of Templates initialized with data from the Zarr file.
  Method: get_channel_locations(self) -> 'np.ndarray'
    Docstring:
      None
  Method: get_dense_templates(self) -> 'np.ndarray'
    Docstring:
      None
  Method: get_one_template_dense(self, unit_index)
    Docstring:
      None
  Method: select_channels(self, channel_ids) -> 'Templates'
    Docstring:
      Return a new Templates object with only the selected channels.
      This operation can be useful to remove bad channels for hybrid recording
      generation.
      
      Parameters
      ----------
      channel_ids : list
          List of channel IDs to select.
  Method: select_units(self, unit_ids) -> 'Templates'
    Docstring:
      Return a new Templates object with only the selected units.
      
      Parameters
      ----------
      unit_ids : list
          List of unit IDs to select.
  Method: to_dict(self)
    Docstring:
      None
  Method: to_json(self)
    Docstring:
      None
  Method: to_sparse(self, sparsity)
    Docstring:
      None
  Method: to_zarr(self, folder_path: 'str | Path') -> 'None'
    Docstring:
      Saves the object's data to a Zarr file in the specified folder.
      
      Use the `add_templates_to_zarr_group` method to serialize the object to a Zarr group and then
      save the group to a Zarr file.
      
      Parameters
      ----------
      folder_path : str | Path
          The path to the folder where the Zarr data will be saved.

Class: UnitsAggregationSorting
  Docstring:
    Aggregates units of multiple sortings into a single sorting object
    
    Parameters
    ----------
    sorting_list: list
        List of BaseSorting objects to aggregate
    renamed_unit_ids: array-like
        If given, unit ids are renamed as provided. If None, unit ids are sequential integers.
    
    Returns
    -------
    aggregate_sorting: UnitsAggregationSorting
        The aggregated sorting object
  __init__(self, sorting_list, renamed_unit_ids=None)

Class: UnitsSelectionSorting
  Docstring:
    Class that handles slicing of a Sorting object based on a list of unit_ids.
    
    Do not use this class directly but use `sorting.select_units(...)`
  __init__(self, parent_sorting, unit_ids=None, renamed_unit_ids=None)

Class: ZarrRecordingExtractor
  Docstring:
    RecordingExtractor for a zarr format
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the zarr root folder. This can be a local path or a remote path (s3:// or gcs://).
        If the path is a remote path, the storage_options can be provided to specify credentials.
        If the remote path is not accessible and backend_options is not provided,
        the function will try to load the object in anonymous mode (anon=True),
        which enables loading data from open buckets.
    storage_options : dict or None
        Storage options for zarr `store`. E.g., if "s3://" or "gcs://" they can provide authentication methods, etc.
    
    Returns
    -------
    recording : ZarrRecordingExtractor
        The recording Extractor
  __init__(self, folder_path: 'Path | str', storage_options: 'dict | None' = None)
  Method: write_recording(recording: 'BaseRecording', folder_path: 'str | Path', storage_options: 'dict | None' = None, **kwargs)
    Docstring:
      None

Class: ZarrSortingExtractor
  Docstring:
    SortingExtractor for a zarr format
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the zarr root file. This can be a local path or a remote path (s3:// or gcs://).
        If the path is a remote path, the storage_options can be provided to specify credentials.
        If the remote path is not accessible and backend_options is not provided,
        the function will try to load the object in anonymous mode (anon=True),
        which enables loading data from open buckets.
    storage_options : dict or None
        Storage options for zarr `store`. E.g., if "s3://" or "gcs://" they can provide authentication methods, etc.
    zarr_group : str or None, default: None
        Optional zarr group path to load the sorting from. This can be used when the sorting is not stored at the root, but in a sub-group.
    Returns
    -------
    sorting : ZarrSortingExtractor
        The sorting Extractor
  __init__(self, folder_path: 'Path | str', storage_options: 'dict | None' = None, zarr_group: 'str | None' = None)
  Method: write_sorting(sorting: 'BaseSorting', folder_path: 'str | Path', storage_options: 'dict | None' = None, **kwargs)
    Docstring:
      Write a sorting extractor to zarr format.

Function: add_synchrony_to_sorting(sorting, sync_event_ratio=0.3, seed=None)
  Docstring:
    Generates a sorting object with added synchronous events from an existing sorting object.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting object.
    sync_event_ratio : float, default: 0.3
        The ratio of added synchronous spikes with respect to the total number of spikes.
        E.g., 0.5 means that the final sorting will have 1.5 times the number of spikes, and all the extra
        spikes are synchronous (same sample_index), but on different units (not duplicates).
    seed : int, default: None
        The random seed.
    
    
    Returns
    -------
    sorting : TransformSorting
        The sorting object, keeping track of added spikes.
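    The effect of `sync_event_ratio` on spike counts is simple arithmetic; the counts below are illustrative, and the library's exact totals may differ slightly due to rounding and unit assignment:

```python
num_spikes = 1000          # spikes in the original sorting (illustrative)
sync_event_ratio = 0.5

num_added = int(num_spikes * sync_event_ratio)   # extra synchronous spikes
total_spikes = num_spikes + num_added            # 1.5x the original count
```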

Function: aggregate_channels(recording_list_or_dict=None, renamed_channel_ids=None, recording_list=None)
  Docstring:
    Aggregates channels of multiple recordings into a single recording object
    
    Parameters
    ----------
    recording_list_or_dict: list | dict
        List or dict of BaseRecording objects to aggregate.
    renamed_channel_ids: array-like
        If given, channel ids are renamed as provided.
    
    Returns
    -------
    aggregate_recording: ChannelsAggregationRecording
        The aggregated recording object

Class: aggregate_units
  Docstring:
    Aggregates units of multiple sortings into a single sorting object
    
    Parameters
    ----------
    sorting_list: list
        List of BaseSorting objects to aggregate
    renamed_unit_ids: array-like
        If given, unit ids are renamed as provided. If None, unit ids are sequential integers.
    
    Returns
    -------
    aggregate_sorting: UnitsAggregationSorting
        The aggregated sorting object
  __init__(self, sorting_list, renamed_unit_ids=None)

Class: append_recordings
  Docstring:
    Takes as input a list of parent recordings each with multiple segments and
    returns a single multi-segment recording that "appends" all segments from
    all parent recordings.
    
    For instance, given one recording with 2 segments and one recording with 3 segments,
    this class will give one recording with 5 segments
    
    Parameters
    ----------
    recording_list : list of BaseRecording
        A list of recordings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across recordings
  __init__(self, recording_list, sampling_frequency_max_diff=0)

Class: append_sortings
  Docstring:
    Returns a sorting that "appends" all segments from all sortings
    into one multi-segment sorting.
    
    Parameters
    ----------
    sorting_list : list of BaseSorting
        A list of sortings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across sortings
  __init__(self, sorting_list, sampling_frequency_max_diff=0)

Function: apply_merges_to_sorting(sorting, merge_unit_groups, new_unit_ids=None, censor_ms=None, return_extra=False, new_id_strategy='append')
  Docstring:
    Apply a resolved representation of the merges to a sorting object.
    
    This function is not lazy and creates a new NumpySorting with a compact spike_vector as fast as possible.
    
    If `censor_ms` is not None, duplicated spikes violating the `censor_ms` refractory period are removed.
    
    Optionally, the boolean mask of kept spikes is returned.
    
    Parameters
    ----------
    sorting : Sorting
        The Sorting object to apply merges.
    merge_unit_groups : list/tuple of lists/tuples
        A list of lists for every merge group. Each element needs to have at least two elements (two units to merge),
        but it can also have more (merge multiple units at once).
    new_unit_ids : list | None, default: None
        New unit_ids for the merged units. If given, it needs to have the same length as `merge_unit_groups`. If None,
        merged units will take the first unit_id of every list of merges.
    censor_ms : float | None, default: None
        When applying the merges, consecutive spikes violating the given refractory period are discarded
    return_extra : bool, default: False
        If True, also return a boolean mask of kept spikes and new_unit_ids.
    new_id_strategy : "append" | "take_first", default: "append"
        The strategy that should be used, if `new_unit_ids` is None, to create new unit_ids.
    
            * "append" : new unit_ids will be appended after max(sorting.unit_ids)
            * "take_first" : new_unit_ids will be the first unit_id of every list of merges
    
    Returns
    -------
    sorting : Sorting
        The newly created sorting with the merged units
    keep_mask : numpy.array
        A boolean mask, if censor_ms is not None, telling which spike from the original spike vector
        has been kept, given the refractory period violations (None if censor_ms is None)
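    A minimal sketch of a `censor_ms` pass over one merged unit's sorted spike frames; this greedy scan is an assumption about the approach, not the library's exact implementation:

```python
import numpy as np

sampling_frequency = 30_000.0
censor_ms = 2.0
censor_samples = int(censor_ms / 1000.0 * sampling_frequency)  # 60 samples

frames = np.array([100, 200, 230, 400, 430])  # sorted frames of a merged unit

# Greedily keep a spike only if it is far enough from the last kept one.
keep_mask = np.ones(frames.size, dtype=bool)
last_kept = frames[0]
for i in range(1, frames.size):
    if frames[i] - last_kept < censor_samples:
        keep_mask[i] = False
    else:
        last_kept = frames[i]

censored = frames[keep_mask]  # -> [100, 200, 400]
```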

Function: compute_sparsity(templates_or_sorting_analyzer: "'Templates | SortingAnalyzer'", noise_levels: 'np.ndarray | None' = None, method: "'radius' | 'best_channels' | 'closest_channels' | 'snr' | 'amplitude' | 'energy' | 'by_property' | 'ptp'" = 'radius', peak_sign: "'neg' | 'pos' | 'both'" = 'neg', num_channels: 'int | None' = 5, radius_um: 'float | None' = 100.0, threshold: 'float | None' = 5, by_property: 'str | None' = None, amplitude_mode: "'extremum' | 'at_index' | 'peak_to_peak'" = 'extremum') -> 'ChannelSparsity'
  Docstring:
    Compute channel sparsity from a `SortingAnalyzer` for each template with several methods.
    
    Parameters
    ----------
    templates_or_sorting_analyzer : Templates | SortingAnalyzer
        A Templates or a SortingAnalyzer object.
        Some methods accept both objects ("best_channels", "radius", ...).
        Other methods require a SortingAnalyzer because internally the recording is needed.
    
    
    method : str
        * "best_channels" : N best channels with the largest amplitude. Use the "num_channels" argument to specify the
                           number of channels.
        * "closest_channels" : N closest channels according to the distance. Use the "num_channels" argument to specify the
                           number of channels.
        * "radius" : radius around the best channel. Use the "radius_um" argument to specify the radius in um.
        * "snr" : threshold based on template signal-to-noise ratio. Use the "threshold" argument
                 to specify the SNR threshold (in units of noise levels) and the "amplitude_mode" argument
                 to specify the mode to compute the amplitude of the templates.
        * "amplitude" : threshold based on the amplitude values on every channel. Use the "threshold" argument
                     to specify the ptp threshold (in units of amplitude) and the "amplitude_mode" argument
                     to specify the mode to compute the amplitude of the templates.
        * "energy" : threshold based on the expected energy that should be present on the channels,
                    given their noise levels. Use the "threshold" argument to specify the energy threshold
                    (in units of noise levels)
        * "by_property" : sparsity is given by a property of the recording and sorting (e.g. "group").
                         In this case the sparsity for each unit is given by the channels that have the same property
                         value as the unit. Use the "by_property" argument to specify the property name.
        * "ptp" : deprecated, use the "snr" method with the "peak_to_peak" amplitude mode instead.
    
    peak_sign : "neg" | "pos" | "both"
        Sign of the template to compute best channels.
    num_channels : int
        Number of channels for "best_channels" method.
    radius_um : float
        Radius in um for "radius" method.
    threshold : float
        Threshold for "snr", "energy" (in units of noise levels) and "ptp" methods (in units of amplitude).
        For the "snr" method, the template amplitude mode is controlled by the "amplitude_mode" argument.
    amplitude_mode : "extremum" | "at_index" | "peak_to_peak"
        Mode to compute the amplitude of the templates for the "snr", "amplitude", and "best_channels" methods.
    by_property : str | None
        Property name for "by_property" method.
    
    
    Returns
    -------
    sparsity : ChannelSparsity
        The estimated sparsity
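
As an illustration of the "best_channels" and "radius" rules, here is a minimal numpy sketch on dense templates. The data, the 20 um probe pitch, and the variable names are all illustrative; this is not the ChannelSparsity implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_units, num_samples, num_channels = 3, 50, 8
templates = rng.normal(size=(num_units, num_samples, num_channels))
channel_locations = np.column_stack([np.zeros(num_channels),
                                     np.arange(num_channels) * 20.0])  # 20 um pitch

# peak-to-peak amplitude per (unit, channel)
ptp = templates.max(axis=1) - templates.min(axis=1)

# "best_channels": keep the num_channels_kept largest-amplitude channels per unit
num_channels_kept = 4
order = np.argsort(ptp, axis=1)[:, ::-1]
best_mask = np.zeros((num_units, num_channels), dtype=bool)
for u in range(num_units):
    best_mask[u, order[u, :num_channels_kept]] = True

# "radius": keep channels within radius_um of each unit's best channel
radius_um = 50.0
best_channel = np.argmax(ptp, axis=1)
dist = np.linalg.norm(
    channel_locations[None, :, :] - channel_locations[best_channel][:, None, :],
    axis=2)
radius_mask = dist <= radius_um
```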

Class: concatenate_recordings
  Docstring:
    Return a recording that "concatenates" all segments from all parent recordings
    into one recording with a single segment. The operation is lazy.
    
    For instance, given one recording with 2 segments and one recording with
    3 segments, this class will give one recording with one large segment
    made by concatenating the 5 segments.
    
    Time information is lost upon concatenation. By default `ignore_times` is True.
    If it is False, you get an error unless:
    
      * all segments DO NOT have times, AND
      * all segments have t_start=None
    
    Parameters
    ----------
    recording_list : list of BaseRecording
        A list of recordings
    ignore_times: bool, default: True
        If True, time information (t_start, time_vector) is ignored when concatenating recordings
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across recordings
  __init__(self, recording_list, ignore_times=True, sampling_frequency_max_diff=0)
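
The lazy concatenation boils down to mapping a frame index in the single output segment back to a (parent segment, local frame) pair via cumulative segment lengths. A sketch of that mapping, with hypothetical segment lengths (this is not the actual segment class):

```python
import numpy as np

# Segment lengths of the 5 parent segments (2 from one recording,
# 3 from another); the numbers are purely illustrative.
segment_lengths = np.array([1000, 500, 800, 1200, 700])
cum_starts = np.concatenate([[0], np.cumsum(segment_lengths)])
# cum_starts = [0, 1000, 1500, 2300, 3500, 4200]

def global_to_local(frame):
    """Map a frame in the concatenated segment to (segment_index, local_frame)."""
    seg = np.searchsorted(cum_starts, frame, side="right") - 1
    return seg, frame - cum_starts[seg]

# frame 1700 falls 200 samples into the third parent segment
```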

Class: concatenate_sortings
  Docstring:
    Return a sorting that "concatenates" all segments from all sortings
    into one sorting with a single segment. This operation is lazy.
    
    For instance, given one sorting with 2 segments and one sorting with
    3 segments, this class will return one sorting with one large segment
    made by concatenating the 5 segments. The returned spike times (originating
    from each segment) are relative to the start of the concatenated segment.
    
    Time information is lost upon concatenation. By default `ignore_times` is True.
    If it is False, you get an error unless:
    
      * all segments DO NOT have times, AND
      * all segments have t_start=None
    
    Parameters
    ----------
    sorting_list : list of BaseSorting
        A list of sortings. If `total_samples_list` is not provided, all
        sortings should have an assigned recording.  Otherwise, all sortings
        should be monosegments.
    total_samples_list : list[int] or None, default: None
        If the sortings have no assigned recording, the total number of samples
        of each of the concatenated (monosegment) sortings is pulled from this
        list.
    ignore_times : bool, default: True
        If True, time information (t_start, time_vector) is ignored
        when concatenating the sortings' assigned recordings.
    sampling_frequency_max_diff : float, default: 0
        Maximum allowed difference of sampling frequencies across sortings
  __init__(self, sorting_list, total_samples_list=None, ignore_times=True, sampling_frequency_max_diff=0)
  Method: get_num_samples(self, segment_index=None)
    Docstring:
      Overrides the BaseSorting method, which requires a recording.

Function: create_extractor_from_new_recording(new_recording)
  Docstring:
    None

Function: create_extractor_from_new_sorting(new_sorting)
  Docstring:
    None

Function: create_recording_from_old_extractor(oldapi_recording_extractor) -> 'OldToNewRecording'
  Docstring:
    None

Function: create_sorting_analyzer(sorting, recording, format='memory', folder=None, sparse=True, sparsity=None, return_scaled=True, overwrite=False, backend_options=None, **sparsity_kwargs) -> "'SortingAnalyzer'"
  Docstring:
    Create a SortingAnalyzer by pairing a Sorting and the corresponding Recording.
    
    This object will handle a list of AnalyzerExtension for all the post processing steps like: waveforms,
    templates, unit locations, spike locations, quality metrics ...
    
    This object is also used for plotting purposes.
    
    
    Parameters
    ----------
    sorting : Sorting
        The sorting object
    recording : Recording
        The recording object
    folder : str or Path or None, default: None
        The folder where analyzer is cached
    format : "memory" | "binary_folder" | "zarr", default: "memory"
        The mode to store the analyzer. If "binary_folder" or "zarr", the analyzer is stored
        on disk in the specified folder (the "folder" argument must be specified in that case).
        If "memory" is used, the analyzer is stored in RAM. Use this option carefully!
    sparse : bool, default: True
        If True, then a sparsity mask is computed using the `estimate_sparsity()` function, using
        a few spikes to get an estimate of dense templates and create a ChannelSparsity object.
        The sparsity is then propagated to all AnalyzerExtensions that handle sparsity (like waveforms, pca, ...).
        You can control `estimate_sparsity()`: all extra arguments are propagated to it (including job_kwargs).
    sparsity : ChannelSparsity or None, default: None
        The sparsity used to compute extensions. If this is given, `sparse` is ignored.
    return_scaled : bool, default: True
        All extensions that work with traces will use this global return_scaled: "waveforms", "noise_levels", "templates".
        This prevents return_scaled from differing across extensions, which would yield a wrong SNR for instance.
    overwrite: bool, default: False
        If True, overwrite the folder if it already exists.
    backend_options : dict | None, default: None
        Keyword arguments for the backend specified by format. It can contain the:
    
            * storage_options: dict | None (fsspec storage options)
            * saving_options: dict | None (additional saving options for creating and saving datasets, e.g. compression/filters for zarr)
    
    sparsity_kwargs : keyword arguments
        Keyword arguments (including job_kwargs) propagated to `estimate_sparsity()` when `sparse` is True.
    
    Returns
    -------
    sorting_analyzer : SortingAnalyzer
        The SortingAnalyzer object
    
    Examples
    --------
    >>> import spikeinterface as si
    
    >>> # Create dense analyzer and save to disk with binary_folder format.
    >>> sorting_analyzer = si.create_sorting_analyzer(sorting, recording, format="binary_folder", folder="/path/to_my/result")
    
    >>> # Can be reloaded
    >>> sorting_analyzer = si.load_sorting_analyzer(folder="/path/to_my/result")
    
    >>> # Extensions can be computed
    >>> sorting_analyzer.compute("unit_locations", ...)
    
    >>> # Can be copied to another format (extensions are propagated)
    >>> sorting_analyzer2 = sorting_analyzer.save_as(format="memory")
    >>> sorting_analyzer3 = sorting_analyzer.save_as(format="zarr", folder="/path/to_my/result.zarr")
    
    >>> # Can make a copy with a subset of units (extensions are propagated for the unit subset)
    >>> sorting_analyzer4 = sorting_analyzer.select_units(unit_ids=sorting.unit_ids[:5], format="memory")
    >>> sorting_analyzer5 = sorting_analyzer.select_units(unit_ids=sorting.unit_ids[:5], format="binary_folder", folder="/result_5units")
    
    Notes
    -----
    
    By default, creating a SortingAnalyzer can be slow because the sparsity is estimated.
    In situations where sparsity is not needed, creation can be made fast by turning
    sparsity off (`sparse=False`) or by providing an external sparsity.

Function: create_sorting_from_old_extractor(oldapi_sorting_extractor) -> 'OldToNewSorting'
  Docstring:
    None

Function: create_sorting_npz(num_seg, file_path)
  Docstring:
    Create a NPZ sorting file.
    
    Parameters
    ----------
    num_seg : int
        The number of segments.
    file_path : str | Path
        The file path to save the NPZ file.

Function: download_dataset(repo: 'str' = 'https://gin.g-node.org/NeuralEnsemble/ephy_testing_data', remote_path: 'str' = 'mearec/mearec_test_10s.h5', local_folder: 'Path | None' = None, update_if_exists: 'bool' = False) -> 'Path'
  Docstring:
    Function to download dataset from a remote repository using a combination of datalad and pooch.
    
    Pooch is designed to download single files from a remote repository.
    Because our datasets in gin sometimes point just to a folder, we still use datalad to download
    a list of all the files in the folder and then use pooch to download them one by one.
    
    Parameters
    ----------
    repo : str, default: "https://gin.g-node.org/NeuralEnsemble/ephy_testing_data"
        The repository to download the dataset from
    remote_path : str, default: "mearec/mearec_test_10s.h5"
        A specific subdirectory in the repository to download (e.g. Mearec, SpikeGLX, etc)
    local_folder : str | Path | None, default: None
        The destination folder / directory to download the dataset to.
        If None, then the path "get_global_dataset_folder()" / f{repo_name} is used (see `spikeinterface.core.globals`)
    update_if_exists : bool, default: False
        If True, forces a re-download of the dataset even if it already exists
    
    Returns
    -------
    Path
        The local path to the downloaded dataset
    
    Notes
    -----
    The reason we use pooch is that we have had problems with datalad not being able to download
    data on Windows machines, especially in CI.
    
    See https://handbook.datalad.org/en/latest/intro/windows.html

Function: ensure_chunk_size(recording, total_memory=None, chunk_size=None, chunk_memory=None, chunk_duration=None, n_jobs=1, **other_kwargs)
  Docstring:
    "chunk_size" is the traces.shape[0] for each worker.
    
    Flexible chunk_size setter, with three options:
        * "chunk_size" : the length in samples of each chunk, independently of channel count and dtype.
        * "chunk_memory" : total memory per chunk per worker
        * "total_memory" : total memory over all workers.
    
    If chunk_size/chunk_memory/total_memory are all None then there is no chunk computing
    and the full trace is retrieved at once.
    
    Parameters
    ----------
    chunk_size : int or None
        size for one chunk per job
    chunk_memory : str or None
        must end with "k", "M", "G", etc for decimal units and "ki", "Mi", "Gi", etc for
        binary units. (e.g. "1k", "500M", "2G", "1ki", "500Mi", "2Gi")
    total_memory : str or None
        must end with "k", "M", "G", etc for decimal units and "ki", "Mi", "Gi", etc for
        binary units. (e.g. "1k", "500M", "2G", "1ki", "500Mi", "2Gi")
    chunk_duration : None or float or str
        Units are seconds if float.
        If str, then the str must contain units (e.g. "1s", "500ms")
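
The memory-string options resolve to a chunk size in samples from the channel count and dtype. A sketch of that conversion (the `memory_to_chunk_size` helper is illustrative, not the actual ensure_chunk_size internals):

```python
import numpy as np

# Decimal and binary unit suffixes, as documented above.
_UNITS = {"k": 1e3, "M": 1e6, "G": 1e9, "ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def memory_to_chunk_size(memory_str, num_channels, dtype="float32"):
    """Resolve a memory string like "500M" or "2Gi" to samples per chunk."""
    for suffix in sorted(_UNITS, key=len, reverse=True):  # try "Mi" before "M"
        if memory_str.endswith(suffix):
            num_bytes = float(memory_str[: -len(suffix)]) * _UNITS[suffix]
            break
    else:
        raise ValueError(f"Unknown memory unit in {memory_str!r}")
    bytes_per_sample = num_channels * np.dtype(dtype).itemsize
    return int(num_bytes / bytes_per_sample)

# "1M" with 25 float32 channels -> 1e6 / (25 * 4) = 10000 samples per chunk
```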

Function: ensure_n_jobs(recording, n_jobs=1)
  Docstring:
    None

Function: estimate_sparsity(sorting: 'BaseSorting', recording: 'BaseRecording', num_spikes_for_sparsity: 'int' = 100, ms_before: 'float' = 1.0, ms_after: 'float' = 2.5, method: "'radius' | 'best_channels' | 'closest_channels' | 'amplitude' | 'snr' | 'by_property' | 'ptp'" = 'radius', peak_sign: "'neg' | 'pos' | 'both'" = 'neg', radius_um: 'float' = 100.0, num_channels: 'int' = 5, threshold: 'float | None' = 5, amplitude_mode: "'extremum' | 'peak_to_peak'" = 'extremum', by_property: 'str | None' = None, noise_levels: 'np.ndarray | list | None' = None, **job_kwargs)
  Docstring:
    Estimate the sparsity without needing a SortingAnalyzer or Templates object.
    In case the sparsity method needs templates, they are computed on-the-fly.
    For the "snr" method, `noise_levels` must be passed with the `noise_levels` argument.
    These can be computed with the `get_noise_levels()` function.
    
    Contrary to the previous implementation:
      * all units are computed in one read of recording
      * it doesn't require a folder
      * it doesn't consume too much memory
      * it uses internally the `estimate_templates_with_accumulator()` which is fast and parallel
    
    Note that the "energy" method is not supported because it requires a `SortingAnalyzer` object.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting
    recording : BaseRecording
        The recording
    num_spikes_for_sparsity : int, default: 100
        How many spikes per unit to compute the sparsity
    ms_before : float, default: 1.0
        Cut out in ms before spike time
    ms_after : float, default: 2.5
        Cut out in ms after spike time
    noise_levels : np.array | None, default: None
        Noise levels required for the "snr" and "energy" methods. You can use the
        `get_noise_levels()` function to compute them.
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems
    
    
    Returns
    -------
    sparsity : ChannelSparsity
        The estimated sparsity

Function: estimate_templates(recording: 'BaseRecording', spikes: 'np.ndarray', unit_ids: 'list | np.ndarray', nbefore: 'int', nafter: 'int', operator: 'str' = 'average', return_scaled: 'bool' = True, job_name=None, **job_kwargs)
  Docstring:
    Estimate dense templates with "average" or "median".
    If "average", `estimate_templates_with_accumulator()` is used internally to save memory.
    
    Parameters
    ----------
    
    recording: BaseRecording
        The recording object
    spikes: 1d numpy array with several fields
        Spikes handled as a unique vector.
        This vector can be obtained with: `spikes = sorting.to_spike_vector()`
    unit_ids: list or numpy array
        List of unit_ids
    nbefore: int
        Number of samples to cut out before a spike
    nafter: int
        Number of samples to cut out after a spike
    return_scaled: bool, default: True
        If True, the traces are scaled before averaging
    
    Returns
    -------
    templates_array: np.array
        The average templates with shape (num_units, nbefore + nafter, num_channels)

Function: estimate_templates_with_accumulator(recording: 'BaseRecording', spikes: 'np.ndarray', unit_ids: 'list | np.ndarray', nbefore: 'int', nafter: 'int', return_scaled: 'bool' = True, job_name=None, return_std: 'bool' = False, verbose: 'bool' = False, **job_kwargs) -> 'np.ndarray'
  Docstring:
    This is a fast implementation to compute template averages and standard deviations.
    This is useful to estimate sparsity without the need to allocate large waveform buffers.
    The mechanism is pretty simple: it accumulates and sums spike waveforms (and their squares)
    in-place per worker and per unit.
    Note that median and percentiles can't be computed with this method, because they don't support
    the accumulator implementation.
    
    Parameters
    ----------
    recording: BaseRecording
        The recording object
    spikes: 1d numpy array with several fields
        Spikes handled as a unique vector.
        This vector can be obtained with: `spikes = sorting.to_spike_vector()`
    unit_ids: list or numpy array
        List of unit_ids
    nbefore: int
        Number of samples to cut out before a spike
    nafter: int
        Number of samples to cut out after a spike
    return_scaled: bool, default: True
        If True, the traces are scaled before averaging
    return_std: bool, default: False
        If True, the standard deviation is also computed.
    
    Returns
    -------
    templates_array: np.array
        The average templates with shape (num_units, nbefore + nafter, num_channels)
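
The accumulator mechanism can be sketched in pure numpy: per-unit running sums of the waveforms and of their squares are enough to recover the mean and the (population) standard deviation without ever storing the waveforms. Random data, illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)
num_units, num_samples, num_channels = 2, 10, 4

waveform_sum = np.zeros((num_units, num_samples, num_channels))
waveform_sq_sum = np.zeros_like(waveform_sum)
counts = np.zeros(num_units, dtype=np.int64)

all_waveforms = {0: [], 1: []}  # kept only to check the result below
for i in range(100):
    unit = i % num_units          # alternate units deterministically
    wf = rng.normal(size=(num_samples, num_channels))
    waveform_sum[unit] += wf      # accumulate in-place, per unit
    waveform_sq_sum[unit] += wf**2
    counts[unit] += 1
    all_waveforms[unit].append(wf)

mean = waveform_sum / counts[:, None, None]
# Var(X) = E[X^2] - E[X]^2  (population variance, as an accumulator allows)
std = np.sqrt(waveform_sq_sum / counts[:, None, None] - mean**2)
```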

Function: extract_waveforms(recording, sorting, folder=None, mode='folder', precompute_template=('average',), ms_before=1.0, ms_after=2.0, max_spikes_per_unit=500, overwrite=None, return_scaled=True, dtype=None, sparse=True, sparsity=None, sparsity_temp_folder=None, num_spikes_for_sparsity=100, unit_batch_size=None, allow_unfiltered=None, use_relative_path=None, seed=None, load_if_exists=None, **kwargs)
  Docstring:
    This mocks the extract_waveforms() of versions <= 0.100 so that old code does not break,
    while internally using the SortingAnalyzer (version > 0.100).
    
    It returns a MockWaveformExtractor object that mocks the old WaveformExtractor.

Function: extract_waveforms_to_buffers(recording, spikes, unit_ids, nbefore, nafter, mode='memmap', return_scaled=False, folder=None, dtype=None, sparsity_mask=None, copy=False, **job_kwargs)
  Docstring:
    Allocate buffers (memmap or shared memory) and then distribute every waveform into these buffers.
    
    Same as calling allocate_waveforms_buffers() and then distribute_waveforms_to_buffers().
    
    Important note: in "shared_memory" mode, arrays_info contains references to
    the shared memory buffers; it must stay referenced as long as the arrays are used,
    and it is also returned.
    To avoid this, a copy to non-shared memory can be performed at the end.
    
    Parameters
    ----------
    recording: recording
        The recording object
    spikes: 1d numpy array with several fields
        Spikes handled as a unique vector.
        This vector can be obtained with: `spikes = Sorting.to_spike_vector()`
    unit_ids: list or numpy array
        List of unit_ids
    nbefore: int
        N samples before spike
    nafter: int
        N samples after spike
    mode: "memmap" | "shared_memory", default: "memmap"
        The mode to use for the buffer
    return_scaled: bool, default: False
        Scale traces before exporting to buffer or not
    folder: str or path or None, default: None
        In case of memmap mode, folder to save npy files
    dtype: numpy.dtype, default: None
        dtype for waveforms buffer
    sparsity_mask: None or array of bool, default: None
        If not None shape must be must be (len(unit_ids), len(channel_ids))
    copy: bool, default: False
        If True, the output shared memory object is copied to a standard numpy array.
        If copy=False, then arrays_info is also returned. Please keep in mind that arrays_info
        needs to stay referenced as long as waveforms_by_units is used, otherwise it will be very hard to debug.
        Also, when copy=False, the SharedMemory will need to be unlinked manually.
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems
    
    
    Returns
    -------
    waveforms_by_units: dict of arrays
        Arrays for all units (memmap or shared_memory)
    
    arrays_info: dict of info
        Optionally returned in the shared_memory case when copy=False.
        Dictionary used to "construct" the arrays in worker processes (memmap file or shared memory info)
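
Stripped of the memmap/shared-memory machinery, the core operation is slicing a window of `nbefore + nafter` samples around each spike out of the traces into a preallocated buffer. A minimal sketch with random traces (all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
num_frames, num_channels = 1000, 4
traces = rng.normal(size=(num_frames, num_channels)).astype("float32")

nbefore, nafter = 20, 44
spike_samples = np.array([100, 250, 600])  # peak sample of each spike

# One slot per spike, nbefore + nafter samples, all channels.
buffer = np.zeros((spike_samples.size, nbefore + nafter, num_channels),
                  dtype="float32")
for i, peak in enumerate(spike_samples):
    buffer[i] = traces[peak - nbefore : peak + nafter]
```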

Function: fix_job_kwargs(runtime_job_kwargs)
  Docstring:
    None

Function: generate_ground_truth_recording(durations=[10.0], sampling_frequency=25000.0, num_channels=4, num_units=10, sorting=None, probe=None, generate_probe_kwargs={'num_columns': 2, 'xpitch': 20, 'ypitch': 20, 'contact_shapes': 'circle', 'contact_shape_params': {'radius': 6}}, templates=None, ms_before=1.0, ms_after=3.0, upsample_factor=None, upsample_vector=None, generate_sorting_kwargs={'firing_rates': 15, 'refractory_period_ms': 4.0}, noise_kwargs={'noise_levels': 5.0, 'strategy': 'on_the_fly'}, generate_unit_locations_kwargs={'margin_um': 10.0, 'minimum_z': 5.0, 'maximum_z': 50.0, 'minimum_distance': 20}, generate_templates_kwargs=None, dtype='float32', seed=None)
  Docstring:
    Generate a recording with spike given a probe+sorting+templates.
    
    Parameters
    ----------
    durations : list[float], default: [10.]
        Durations in seconds for all segments.
    sampling_frequency : float, default: 25000.0
        Sampling frequency.
    num_channels : int, default: 4
        Number of channels, not used when probe is given.
    num_units : int, default: 10
        Number of units,  not used when sorting is given.
    sorting : Sorting | None
        An external sorting object. If not provided, one is generated.
    probe : Probe | None
        An external Probe object. If not provided a probe is generated using generate_probe_kwargs.
    generate_probe_kwargs : dict
        A dict to construct the Probe using :py:func:`probeinterface.generate_multi_columns_probe()`.
    templates : np.array | None
        The templates of units.
        If None they are generated.
        Shape can be:
    
            * (num_units, num_samples, num_channels): standard case
            * (num_units, num_samples, num_channels, upsample_factor): case with oversampled templates to introduce jitter.
    ms_before : float, default: 1.0
        Cut out in ms before spike peak.
    ms_after : float, default: 3.0
        Cut out in ms after spike peak.
    upsample_factor : None | int, default: None
        An upsampling factor, used only when templates are not provided.
    upsample_vector : np.array | None
        Optionally, the upsample_vector can be given. It has the same shape as the spike vector.
    generate_sorting_kwargs : dict
        When sorting is not provided, this dict is used to generate a Sorting.
    noise_kwargs : dict
        Dict used to generate the noise with NoiseGeneratorRecording.
    generate_unit_locations_kwargs : dict
        Dict used to generate unit locations when templates are not provided.
    generate_templates_kwargs : dict
        Dict used to generate templates when templates are not provided.
    dtype : np.dtype, default: "float32"
        The dtype of the recording.
    seed : int | None
        Seed for random initialization.
        If None, a different Recording is generated at every call.
        Note: even with None, a generated recording internally keeps a seed to regenerate the same signal after dump/load.
    
    Returns
    -------
    recording : Recording
        The generated recording extractor.
    sorting : Sorting
        The generated sorting extractor.

Function: generate_recording(num_channels: 'int' = 2, sampling_frequency: 'float' = 30000.0, durations: 'list[float]' = [5.0, 2.5], set_probe: 'bool | None' = True, ndim: 'int | None' = 2, seed: 'int | None' = None) -> 'NumpySorting'
  Docstring:
    Generate a lazy recording object.
    Useful for testing API and algos.
    
    Parameters
    ----------
    num_channels : int, default: 2
        The number of channels in the recording.
    sampling_frequency : float, default: 30000.0
        The sampling frequency of the recording in Hz.
    durations : list[float], default: [5.0, 2.5]
        The duration in seconds of each segment in the recording.
        The number of segments is determined by the length of this list.
    set_probe : bool, default: True
        If true, attaches probe to the returned `Recording`
    ndim : int, default: 2
        The number of dimensions of the probe. Set to 3 to make a 3-dimensional probe.
    seed : int | None, default: None
        A seed for the np.random.default_rng function
    
    Returns
    -------
    NumpyRecording
        Returns a NumpyRecording object with the specified parameters.

Function: generate_recording_by_size(full_traces_size_GiB: 'float', seed: 'int | None' = None, strategy: "Literal['tile_pregenerated', 'on_the_fly']" = 'tile_pregenerated') -> 'NoiseGeneratorRecording'
  Docstring:
    Generate a large lazy recording.
    This is a convenience wrapper around the NoiseGeneratorRecording class where only
    the size in GiB (NOT GB!) is specified.
    
    It is generated with 384 channels and a sampling frequency of 1 Hz. The duration is manipulated to
    produce the desired size.
    
    See NoiseGeneratorRecording for more details.
    
    Parameters
    ----------
    full_traces_size_GiB : float
        The size in gigabytes (GiB) of the recording.
    seed : int | None, default: None
        The seed for np.random.default_rng.
    strategy : "tile_pregenerated" | "on_the_fly", default: "tile_pregenerated"
        The strategy of generating noise chunk:
          * "tile_pregenerated": pregenerate a noise chunk of noise_block_size samples and repeat it.
                                 Very fast, and consumes only one noise block of memory.
          * "on_the_fly": generate a new noise block on the fly by combining seed + noise block index.
                          No memory preallocation, but a bit more computation (random generation).
    Returns
    -------
    GeneratorRecording
        A lazy random recording with the specified size.
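
The "on_the_fly" strategy can be sketched with numpy alone: seeding a generator with the pair (seed, block index) makes every noise block reproducible without storing it. The `noise_block` helper below is illustrative, not the NoiseGeneratorRecording implementation:

```python
import numpy as np

def noise_block(seed, block_index, block_size=1000, num_channels=384):
    """Deterministically regenerate one block of noise from (seed, block_index)."""
    rng = np.random.default_rng([seed, block_index])  # seed sequence from both ints
    return rng.normal(size=(block_size, num_channels)).astype("float32")

a = noise_block(seed=42, block_index=7)
b = noise_block(seed=42, block_index=7)  # identical: regenerated, not stored
c = noise_block(seed=42, block_index=8)  # a different block
```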

Function: generate_snippets(nbefore=20, nafter=44, num_channels=2, wf_folder=None, sampling_frequency=30000.0, durations=[10.325, 3.5], set_probe=True, ndim=2, num_units=5, empty_units=None, **job_kwargs)
  Docstring:
    Generates a synthetic Snippets object.
    
    Parameters
    ----------
    nbefore : int, default: 20
        Number of samples before the peak.
    nafter : int, default: 44
        Number of samples after the peak.
    num_channels : int, default: 2
        Number of channels.
    wf_folder : str | Path | None, default: None
        Optional folder to save the waveform snippets. If None, snippets are in memory.
    sampling_frequency : float, default: 30000.0
        The sampling frequency of the snippets in Hz.
    ndim : int, default: 2
        The number of dimensions of the probe.
    num_units : int, default: 5
        The number of units.
    empty_units : list | None, default: None
        A list of units that will have no spikes.
    durations : List[float], default: [10.325, 3.5]
        The duration in seconds of each segment in the recording.
        The number of segments is determined by the length of this list.
    set_probe : bool, default: True
        If true, attaches probe to the returned snippets object
    **job_kwargs : dict, default: None
        Job keyword arguments for `snippets_from_sorting`
    
    Returns
    -------
    snippets : NumpySnippets
        The snippets object.
    sorting : NumpySorting
        The associated sorting object.

Function: generate_sorting(num_units=5, sampling_frequency=30000.0, durations=[10.325, 3.5], firing_rates=3.0, empty_units=None, refractory_period_ms=4.0, add_spikes_on_borders=False, num_spikes_per_border=3, border_size_samples=20, seed=None)
  Docstring:
    Generates sorting object with random firings.
    
    Parameters
    ----------
    num_units : int, default: 5
        Number of units.
    sampling_frequency : float, default: 30000.0
        The sampling frequency of the recording in Hz.
    durations : list, default: [10.325, 3.5]
        Duration of each segment in s.
    firing_rates : float, default: 3.0
        The firing rate of each unit (in Hz).
    empty_units : list, default: None
        List of units that will have no spikes (mainly used for testing).
    refractory_period_ms : float, default: 4.0
        The refractory period in ms
    add_spikes_on_borders : bool, default: False
        If True, spikes will be added close to the borders of the segments.
        This is for testing some post-processing functions when they have
        to deal with border spikes.
    num_spikes_per_border : int, default: 3
        The number of spikes to add close to the borders of the segments.
    border_size_samples : int, default: 20
        The size of the border in samples to add border spikes.
    seed : int, default: None
        The random seed.
    
    Returns
    -------
    sorting : NumpySorting
        The sorting object.
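
One way to picture the generation of a single unit (illustrative, not necessarily generate_sorting's exact algorithm) is Poisson-like firing with a refractory floor on the inter-spike intervals:

```python
import numpy as np

rng = np.random.default_rng(0)
sampling_frequency = 30000.0
duration_s, firing_rate_hz, refractory_ms = 10.0, 3.0, 4.0

refractory_s = refractory_ms / 1000.0
# Draw more intervals than needed, then keep spikes inside the segment.
isis = rng.exponential(1.0 / firing_rate_hz,
                       size=int(firing_rate_hz * duration_s * 3))
isis = isis + refractory_s                 # enforce the refractory period
spike_times = np.cumsum(isis)
spike_times = spike_times[spike_times < duration_s]
spike_samples = (spike_times * sampling_frequency).astype(np.int64)
```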

Function: generate_templates(channel_locations, units_locations, sampling_frequency, ms_before, ms_after, seed=None, dtype='float32', upsample_factor=None, unit_params=None, mode='ellipsoid')
  Docstring:
    Generate some templates from the given channel positions and neuron positions.
    
    The implementation is very naive: it generates a mono-channel waveform using generate_single_fake_waveform()
    and duplicates this same waveform on all channels given a simple per-unit decay law.
    
    
    Parameters
    ----------
    
    channel_locations : np.ndarray
        Channel locations.
    units_locations : np.ndarray
        Must be 3D.
    sampling_frequency : float
        Sampling frequency.
    ms_before : float
        Cut out in ms before spike peak.
    ms_after : float
        Cut out in ms after spike peak.
    seed : int | None
        A seed for random.
    dtype : numpy.dtype, default: "float32"
        Templates dtype
    upsample_factor : int | None, default: None
        If not None, templates are generated upsampled by this factor.
        A new dimension (axis=3) is then added to the template with intermediate inter-sample representations.
        This allows easy random jitter by choosing a template along this new dimension.
    unit_params : dict[np.array] | dict[float] | dict[tuple] | None, default: None
        An optional dict containing parameters per unit.
        Keys are parameter names:
    
            * "alpha": amplitude of the action potential in a.u. (default range: (6'000-9'000))
            * "depolarization_ms": the depolarization interval in ms (default range: (0.09-0.14))
            * "repolarization_ms": the repolarization interval in ms (default range: (0.5-0.8))
            * "recovery_ms": the recovery interval in ms (default range: (1.0-1.5))
            * "positive_amplitude": the positive amplitude in a.u. (default range: (0.05-0.15)) (negative is always -1)
            * "smooth_ms": the gaussian smooth in ms (default range: (0.03-0.07))
            * "spatial_decay": the spatial constant (default range: (20-40))
            * "propagation_speed": mimic a propagation delay with a kind of a "speed" (default range: (250., 350.)).
    
        Values can be:
            * an array of the same length as units
            * a scalar, then an array is created
            * a tuple, then this defines a range for random values.
    mode : "ellipsoid" | "sphere", default: "ellipsoid"
        Method used to calculate the distance between unit and channel location.
        Ellipsoid injects some anisotropy dependent on unit shape, sphere is equivalent
        to Euclidean distance.
    
    Returns
    -------
    templates: np.array
        The template array with shape
            * (num_units, num_samples, num_channels): standard case
            * (num_units, num_samples, num_channels, upsample_factor) if upsample_factor is not None
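
The "simple decay law" mentioned above can be illustrated with a small numpy sketch, assuming an exponential amplitude decay with unit-to-channel distance (roughly the "sphere" mode); `spread_waveform` is a hypothetical helper, not the library function:

```python
import numpy as np

def spread_waveform(one_channel_waveform, channel_locations, unit_location,
                    spatial_decay=30.0):
    """Duplicate a mono-channel waveform on every channel with an exponential
    amplitude decay versus unit-to-channel distance (illustrative only; the
    actual decay law in the library may differ)."""
    # Euclidean distance from the 3D unit position to each 2D channel,
    # channels assumed to sit at z = 0.
    channel_xyz = np.concatenate(
        [channel_locations, np.zeros((channel_locations.shape[0], 1))], axis=1
    )
    distances = np.linalg.norm(channel_xyz - unit_location, axis=1)
    gains = np.exp(-distances / spatial_decay)            # (num_channels,)
    # Result has shape (num_samples, num_channels)
    return one_channel_waveform[:, np.newaxis] * gains[np.newaxis, :]

wf = -np.hanning(64)                        # toy spike-like negative waveform
locs = np.array([[0.0, 0.0], [0.0, 40.0]])  # two channels 40 um apart
template = spread_waveform(wf, locs, unit_location=np.array([0.0, 0.0, 10.0]))
```

The channel closest to the unit keeps the largest amplitude, and farther channels see a smaller copy of the same waveform.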

Function: get_available_analyzer_extensions()
  Docstring:
    Get all extensions that can be computed by the analyzer.

Function: get_best_job_kwargs()
  Docstring:
    Gives the best possible job_kwargs for the platform.
    Currently this function is based on developer experience, but it may be adapted in the future.

Function: get_channel_distances(recording)
  Docstring:
    Distance between channel pairs

Function: get_chunk_with_margin(rec_segment, start_frame, end_frame, channel_indices, margin, add_zeros=False, add_reflect_padding=False, window_on_margin=False, dtype=None)
  Docstring:
    Helper to get chunk with margin
    
    The margin is extracted from the recording when possible. If
    at the edge of the recording, no margin is used unless one
    of `add_zeros` or `add_reflect_padding` is True. In the first
    case zero padding is used, in the second case np.pad is called
    with mode="reflect".
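
A minimal sketch of this margin logic, assuming `traces` is a (num_samples, num_channels) array; `chunk_with_margin` here is an illustrative helper, not the library function itself:

```python
import numpy as np

def chunk_with_margin(traces, start_frame, end_frame, margin,
                      add_reflect_padding=False):
    """Take real samples for the margin when available; otherwise, optionally
    pad at the edges with np.pad(mode="reflect"). Returns the chunk plus the
    left/right margin sizes actually used."""
    n = traces.shape[0]
    left = min(margin, start_frame)        # real samples available on the left
    right = min(margin, n - end_frame)     # and on the right
    chunk = traces[start_frame - left:end_frame + right]
    if add_reflect_padding:
        pad_left, pad_right = margin - left, margin - right
        if pad_left or pad_right:
            chunk = np.pad(chunk, ((pad_left, pad_right), (0, 0)), mode="reflect")
        left = right = margin
    return chunk, left, right

traces = np.arange(20, dtype="float32").reshape(10, 2)  # 10 samples, 2 channels
chunk, l, r = chunk_with_margin(traces, 0, 5, margin=3, add_reflect_padding=True)
```

At the start of the recording no real left margin exists, so three reflected samples are prepended instead.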

Function: get_closest_channels(recording, channel_ids=None, num_channels=None)
  Docstring:
    Get closest channels + distances
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to get closest channels
    channel_ids : list
        List of channel ids for which to compute the nearest neighborhood
    num_channels : int, default: None
        Maximum number of neighborhood channels to return
    
    Returns
    -------
    closest_channels_inds : array (2d)
        Closest channel indices in ascending order for each channel id given in input
    dists : array (2d)
        Distance in ascending order for each channel id given in input

Function: get_default_analyzer_extension_params(extension_name: 'str')
  Docstring:
    Get the default params for an extension.
    
    Parameters
    ----------
    extension_name : str
        The extension name
    
    Returns
    -------
    default_params : dict
        The default parameters for the extension

Function: get_default_zarr_compressor(clevel: 'int' = 5)
  Docstring:
    Return default Zarr compressor object for good performance with int16
    electrophysiology data.
    
    cname: zstd (zstandard)
    clevel: 5
    shuffle: BITSHUFFLE
    
    Parameters
    ----------
    clevel : int, default: 5
        Compression level (higher -> more compressed).
        Minimum 1, maximum 9.
    
    Returns
    -------
    Blosc.compressor
        The compressor object that can be used with the save to zarr function

Function: get_global_dataset_folder()
  Docstring:
    Get the global dataset folder.

Function: get_global_job_kwargs()
  Docstring:
    Get the global job kwargs.

Function: get_global_tmp_folder()
  Docstring:
    Get the global path temporary folder.

Function: get_noise_levels(recording: "'BaseRecording'", return_scaled: 'bool' = True, method: "Literal['mad', 'std']" = 'mad', force_recompute: 'bool' = False, random_slices_kwargs: 'dict' = {}, **kwargs) -> 'np.ndarray'
  Docstring:
    Estimate the noise level for each channel using the MAD method.
    You can use the standard deviation instead with `method="std"`.
    
    Internally, it samples some chunks across segments,
    then applies the MAD estimator (more robust than the STD) or the STD on each chunk.
    Finally, the average of all MAD/STD values is returned.
    
    The result is cached in a property of the recording, so that the next call on the same
    recording will use the cached result unless `force_recompute=True`.
    
    Parameters
    ----------
    
    recording : BaseRecording
        The recording extractor to get noise levels
    return_scaled : bool
        If True, returned noise levels are scaled to uV
    method : "mad" | "std", default: "mad"
        The method to use to estimate noise levels
    force_recompute : bool
        If True, noise levels are recomputed even if they are already stored in the recording extractor
    random_slices_kwargs : dict
        Options transmitted to get_random_recording_slices(); please read that function's
        documentation for more details.
    
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems
    
    
    Returns
    -------
    noise_levels : array
        Noise levels for each channel
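
The MAD estimator described above can be sketched in a few lines of numpy; this is an illustration of the computation, not the library's parallelized implementation (`mad_noise_levels` is a hypothetical name):

```python
import numpy as np

def mad_noise_levels(traces_chunks):
    """Per channel: median(|x - median(x)|) / 0.6744897501960817 on each chunk,
    then average across chunks. The scaling constant (the 0.75 quantile of the
    standard normal) makes the MAD comparable to the standard deviation for
    Gaussian noise."""
    mads = []
    for chunk in traces_chunks:            # each chunk: (num_samples, num_channels)
        med = np.median(chunk, axis=0, keepdims=True)
        mads.append(np.median(np.abs(chunk - med), axis=0) / 0.6744897501960817)
    return np.mean(mads, axis=0)

rng = np.random.default_rng(0)
chunks = [rng.normal(0.0, 5.0, size=(10000, 4)) for _ in range(3)]
levels = mad_noise_levels(chunks)          # close to 5.0 on every channel
```

For Gaussian noise with std 5.0 the estimate converges to 5.0, which is why MAD-based levels can be used interchangeably with STD-based ones on clean data while being more robust to spikes.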

Function: get_random_data_chunks(recording, return_scaled=False, concatenated=True, **random_slices_kwargs)
  Docstring:
    Extract random chunks across segments.
    
    Internally, it uses `get_random_recording_slices()` and retrieves the traces chunk as a list
    or a concatenated unique array.
    
    Please read `get_random_recording_slices()` for more details on parameters.
    
    
    Parameters
    ----------
    recording : BaseRecording
        The recording to get random chunks from
    return_scaled : bool, default: False
        If True, returned chunks are scaled to uV
    num_chunks_per_segment : int, default: 20
        Number of chunks per segment
    concatenated : bool, default: True
        If True chunk are concatenated along time axis
    **random_slices_kwargs : dict
        Options transmitted to get_random_recording_slices(); please read that function's
        documentation for more details.
    
    Returns
    -------
    chunk_list : np.array | list of np.array
        Concatenated chunks (single array) or a list of chunk arrays per segment

Function: get_template_amplitudes(templates_or_sorting_analyzer, peak_sign: "'neg' | 'pos' | 'both'" = 'neg', mode: "'extremum' | 'at_index' | 'peak_to_peak'" = 'extremum', return_scaled: 'bool' = True, abs_value: 'bool' = True)
  Docstring:
    Get amplitude per channel for each unit.
    
    Parameters
    ----------
    templates_or_sorting_analyzer : Templates | SortingAnalyzer
        A Templates or a SortingAnalyzer object
    peak_sign :  "neg" | "pos" | "both"
        Sign of the template to find extremum channels
    mode : "extremum" | "at_index" | "peak_to_peak", default: "extremum"
        Where the amplitude is computed
        * "extremum" : take the peak value (max or min depending on `peak_sign`)
        * "at_index" : take value at `nbefore` index
        * "peak_to_peak" : take the peak-to-peak amplitude
    return_scaled : bool, default: True
        Whether the amplitude is returned scaled to uV or not.
    abs_value : bool, default: True
        Whether the extremum amplitude should be returned as an absolute value or not
    
    Returns
    -------
    peak_values : dict
        Dictionary with unit ids as keys and template amplitudes as values

Function: get_template_extremum_amplitude(templates_or_sorting_analyzer, peak_sign: "'neg' | 'pos' | 'both'" = 'neg', mode: "'extremum' | 'at_index' | 'peak_to_peak'" = 'at_index', abs_value: 'bool' = True)
  Docstring:
    Computes amplitudes on the best channel.
    
    Parameters
    ----------
    templates_or_sorting_analyzer : Templates | SortingAnalyzer
        A Templates or a SortingAnalyzer object
    peak_sign :  "neg" | "pos" | "both"
        Sign of the template to find extremum channels
    mode : "extremum" | "at_index" | "peak_to_peak", default: "at_index"
        Where the amplitude is computed
        * "extremum": take the peak value (max or min depending on `peak_sign`)
        * "at_index": take value at `nbefore` index
        * "peak_to_peak": take the peak-to-peak amplitude
    abs_value : bool, default: True
        Whether the extremum amplitude should be returned as an absolute value or not
    
    
    Returns
    -------
    amplitudes : dict
        Dictionary with unit ids as keys and amplitudes as values

Function: get_template_extremum_channel(templates_or_sorting_analyzer, peak_sign: "'neg' | 'pos' | 'both'" = 'neg', mode: "'extremum' | 'at_index' | 'peak_to_peak'" = 'extremum', outputs: "'id' | 'index'" = 'id')
  Docstring:
    Compute the channel with the extremum peak for each unit.
    
    Parameters
    ----------
    templates_or_sorting_analyzer : Templates | SortingAnalyzer
        A Templates or a SortingAnalyzer object
    peak_sign :  "neg" | "pos" | "both"
        Sign of the template to find extremum channels
    mode : "extremum" | "at_index" | "peak_to_peak", default: "extremum"
        Where the amplitude is computed
        * "extremum" : take the peak value (max or min depending on `peak_sign`)
        * "at_index" : take value at `nbefore` index
        * "peak_to_peak" : take the peak-to-peak amplitude
    outputs : "id" | "index", default: "id"
        * "id" : channel id
        * "index" : channel index
    
    Returns
    -------
    extremum_channels : dict
        Dictionary with unit ids as keys and extremum channels (id or index based on "outputs")
        as values
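
The "extremum" mode above can be illustrated with a self-contained numpy sketch operating on a raw templates array; `extremum_channel_index` is a hypothetical helper, not the library function:

```python
import numpy as np

def extremum_channel_index(templates, peak_sign="neg"):
    """For each unit, pick the channel holding the largest peak of the
    requested sign. templates has shape (num_units, num_samples, num_channels)."""
    if peak_sign == "neg":
        per_channel = templates.min(axis=1)        # most negative value per channel
        return np.argmin(per_channel, axis=1)
    elif peak_sign == "pos":
        per_channel = templates.max(axis=1)        # most positive value per channel
        return np.argmax(per_channel, axis=1)
    else:  # "both": largest absolute deflection
        per_channel = np.abs(templates).max(axis=1)
        return np.argmax(per_channel, axis=1)

templates = np.zeros((2, 30, 4))
templates[0, 15, 2] = -50.0    # unit 0 peaks (negatively) on channel 2
templates[1, 15, 0] = -80.0    # unit 1 on channel 0
best = extremum_channel_index(templates, peak_sign="neg")
```

Mapping the resulting channel indices back to channel ids corresponds to the `outputs="id"` option of the real function.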

Function: get_template_extremum_channel_peak_shift(templates_or_sorting_analyzer, peak_sign: "'neg' | 'pos' | 'both'" = 'neg')
  Docstring:
    In some situations spike sorters can return a spike index with a small shift relative to the waveform peak.
    This function estimates and returns these alignment shifts for the mean template.
    This function is internally used by `compute_spike_amplitudes()` to accurately retrieve the spike amplitudes.
    
    Parameters
    ----------
    templates_or_sorting_analyzer : Templates | SortingAnalyzer
        A Templates or a SortingAnalyzer object
    peak_sign :  "neg" | "pos" | "both"
        Sign of the template to find extremum channels
    
    Returns
    -------
    shifts : dict
        Dictionary with unit ids as keys and shifts as values

Function: inject_some_duplicate_units(sorting, num=4, max_shift=5, ratio=None, seed=None)
  Docstring:
    Inject some duplicate units in a sorting.
    The peak shift can be controlled within a range.
    
    Parameters
    ----------
    sorting : BaseSorting
        Original sorting.
    num : int, default: 4
        Number of injected units.
    max_shift : int, default: 5
        Range of the shift in samples.
    ratio : float | None, default: None
        Proportion of original spikes in the injected units.
    seed : int | None, default: None
        Random seed for creating unit peak shifts.
    
    Returns
    -------
    sorting_with_dup: Sorting
        A sorting with more units.
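
The duplication-with-shift idea can be sketched for a single unit; this is illustrative numpy, not the library's implementation (`duplicate_spike_train` is a hypothetical name):

```python
import numpy as np

def duplicate_spike_train(spike_frames, max_shift=5, ratio=None, seed=None):
    """Copy a spike train, jitter every spike by a random shift in
    [-max_shift, max_shift] samples, and optionally keep only a
    fraction `ratio` of the spikes."""
    rng = np.random.default_rng(seed)
    shifts = rng.integers(-max_shift, max_shift + 1, size=spike_frames.size)
    dup = spike_frames + shifts
    if ratio is not None:
        keep = rng.random(dup.size) < ratio        # subsample the duplicate
        dup = dup[keep]
    return np.sort(dup)

train = np.arange(100, 2000, 100)                  # a regular toy spike train
dup = duplicate_spike_train(train, max_shift=5, ratio=None, seed=0)
```

The duplicate unit then fires within a few samples of the original on (a fraction of) its spikes, which is useful for testing duplicate-detection logic.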

Function: inject_some_split_units(sorting, split_ids: 'list', num_split=2, output_ids=False, seed=None)
  Docstring:
    Inject some split units in a sorting.
    
    Parameters
    ----------
    sorting : BaseSorting
        Original sorting.
    split_ids : list
        List of unit_ids to split.
    num_split : int, default: 2
        Number of split units.
    output_ids : bool, default: False
        If True, return the new unit_ids.
    seed : int, default: None
        Random seed.
    
    Returns
    -------
    sorting_with_split : NumpySorting
        A sorting with split units.
    other_ids : dict
        The dictionary with the split unit_ids. Returned only if output_ids is True.

Class: inject_templates
  Docstring:
    Class for creating a recording based on spike timings and templates.
    Can be just the templates or can add to an already existing recording.
    
    Parameters
    ----------
    sorting : BaseSorting
        Sorting object containing all the units and their spike train.
    templates : np.ndarray[n_units, n_samples, n_channels] | np.ndarray[n_units, n_samples, n_oversampling]
        Array containing the templates to inject for all the units.
        Shape can be:
    
            * (num_units, num_samples, num_channels): standard case
            * (num_units, num_samples, num_channels, upsample_factor): case with oversample template to introduce sampling jitter.
    nbefore : list[int] | int | None, default: None
        The number of samples before the peak of the template to align the spike.
        If None, will default to the highest peak.
    amplitude_factor : list[float] | float | None, default: None
        The amplitude of each spike for each unit.
        Can be None (no scaling).
        Can be a scalar: all spikes then have the same factor (rarely useful).
        Can be a vector with the same shape as the spike_vector of the sorting.
    parent_recording : BaseRecording | None, default: None
        The recording over which to add the templates.
        If None, will default to traces containing all 0.
    num_samples : list[int] | int | None, default: None
        The number of samples in the recording per segment.
        You can use int for mono-segment objects.
    upsample_vector : np.array | None, default: None
        When templates is 4d we can simulate a jitter.
        Optionally, upsample_vector gives the jitter index, with one value per spike in the range 0 to templates.shape[3] - 1.
    check_borders : bool, default: False
        Checks if the border of the templates are zero.
    
    Returns
    -------
    injected_recording: InjectTemplatesRecording
        The recording with the templates injected.
  __init__(self, sorting: 'BaseSorting', templates: 'np.ndarray', nbefore: 'list[int] | int | None' = None, amplitude_factor: 'list[float] | float | None' = None, parent_recording: 'BaseRecording | None' = None, num_samples: 'list[int] | int | None' = None, upsample_vector: 'np.array | None' = None, check_borders: 'bool' = False) -> 'None'

Function: is_set_global_dataset_folder() -> 'bool'
  Docstring:
    Check if the global path dataset folder has been manually set.

Function: is_set_global_tmp_folder() -> 'bool'
  Docstring:
    Check if the global path temporary folder have been manually set.

Function: load(file_or_folder_or_dict, **kwargs) -> 'BaseExtractor | SortingAnalyzer | Motion | Template'
  Docstring:
    General load function to load a SpikeInterface object.
    
    The function can load:
        - a `Recording` or `Sorting` object from:
            * dictionary
            * json file
            * pkl file
            * binary folder (after `extractor.save(..., format='binary_folder')`)
            * zarr folder (after `extractor.save(..., format='zarr')`)
            * remote zarr folder
        - a `SortingAnalyzer` object from:
            * binary folder
            * zarr folder
            * remote zarr folder
            * WaveformExtractor folder (backward compatibility for v<0.101)
        - a `Motion` object from:
           * folder (after `Motion.save(folder)`)
        - a `Templates` object from:
           * zarr folder (after `Templates.add_templates_to_zarr_group()`)
           * dictionary (after `Templates.to_dict()`)
    
    Parameters
    ----------
    file_or_folder_or_dict : dictionary or folder or file (json, pickle)
        The file path, folder path, or dictionary to load the Recording, Sorting, or SortingAnalyzer from
    kwargs : keyword arguments for various objects, including
        * base_folder: str | Path | bool
            The base folder to make relative paths absolute. Only used to load Recording/Sorting objects.
            If True and file_or_folder_or_dict is a file, the parent folder of the file is used.
        * load_extensions: bool, default: True
            If True, the SortingAnalyzer extensions are loaded. Only used to load SortingAnalyzer objects.
        * storage_options: dict | None, default: None
            The storage options to use when loading the object. Only used to load Recording/Sorting objects.
        * backend_options: dict | None, default: None
            The backend options to use when loading the object. Only used to load SortingAnalyzer objects.
            The dictionary can contain the following keys:
            - storage_options: dict | None (fsspec storage options)
            - saving_options: dict | None (additional saving options for creating and saving datasets)
    
    Returns
    -------
    spikeinterface object: Recording or Sorting or SortingAnalyzer or Motion or Templates
        The loaded spikeinterface object

Function: load_extractor(file_or_folder_or_dict, base_folder=None) -> 'BaseExtractor'
  Docstring:
    None

Function: load_sorting_analyzer(folder, load_extensions=True, format='auto', backend_options=None) -> "'SortingAnalyzer'"
  Docstring:
    Load a SortingAnalyzer object from disk.
    
    Parameters
    ----------
    folder : str or Path
        The folder / zarr folder where the analyzer is stored. If the folder is a remote path stored in the cloud,
        the backend_options can be used to specify credentials. If the remote path is not accessible,
        and backend_options is not provided, the function will try to load the object in anonymous mode (anon=True),
        which enables to load data from open buckets.
    load_extensions : bool, default: True
        Load all extensions or not.
    format : "auto" | "binary_folder" | "zarr"
        The format of the folder.
    backend_options : dict | None, default: None
        The backend options for the backend.
        The dictionary can contain the following keys:
    
            * storage_options: dict | None (fsspec storage options)
            * saving_options: dict | None (additional saving options for creating and saving datasets)
    
    Returns
    -------
    sorting_analyzer : SortingAnalyzer
        The loaded SortingAnalyzer

Function: load_sorting_analyzer_or_waveforms(folder, sorting=None)
  Docstring:
    Load a SortingAnalyzer from either a newly saved SortingAnalyzer folder or an old WaveformExtractor folder.
    
    Parameters
    ----------
    folder: str | Path
        The folder to the sorting analyzer or waveform extractor
    sorting: BaseSorting | None, default: None
        The sorting object to instantiate with the SortingAnalyzer (only used for old WaveformExtractor)
    
    Returns
    -------
    sorting_analyzer: SortingAnalyzer
        The returned SortingAnalyzer.

Function: load_waveforms(folder, with_recording: 'bool' = True, sorting: 'Optional[BaseSorting]' = None, output='MockWaveformExtractor') -> 'MockWaveformExtractor | SortingAnalyzer'
  Docstring:
    This reads an old WaveformExtractor folder (binary folder or zarr) and converts it into a SortingAnalyzer or MockWaveformExtractor.
    
    It also mimics the old load_waveforms by opening a SortingResult folder and returning a MockWaveformExtractor.
    This latter behavior is useful to avoid breaking old code like the following in versions >= 0.101:
    
    >>> # In this example, `we` is a MockWaveformExtractor that behaves the same as before
    >>> we = extract_waveforms(..., folder="/my_we")
    >>> we = load_waveforms("/my_we")
    >>> templates = we.get_all_templates()
    
    Parameters
    ----------
    folder: str | Path
        The folder to the waveform extractor (binary or zarr)
    with_recording: bool
        For back-compatibility, ignored
    sorting: BaseSorting | None, default: None
        The sorting object to instantiate with the Waveforms
    output: "MockWaveformExtractor" | "SortingAnalyzer", default: "MockWaveformExtractor"
        The output format
    
    Returns
    -------
    waveforms_or_analyzer: MockWaveformExtractor | SortingAnalyzer
        The returned MockWaveformExtractor or SortingAnalyzer

Class: noise_generator_recording
  Docstring:
    A lazy recording that generates white noise samples if and only if `get_traces` is called.
    
    This is done by tiling a small noise chunk.
    
    Two strategies are available to be reproducible across different start/end frame calls:
      * "tile_pregenerated": pregenerate a small noise block and tile it depending on start_frame/end_frame
      * "on_the_fly": generate small noise chunks on the fly and tile them; the seed also depends on the noise block index.
    
    
    Parameters
    ----------
    num_channels : int
        The number of channels.
    sampling_frequency : float
        The sampling frequency of the recorder.
    durations : list[float]
        The durations of each segment in seconds. Note that the length of this list is the number of segments.
    noise_levels : float | np.array, default: 1.0
        Std of the white noise (if an array, defined by per channels)
    cov_matrix : np.array | None, default: None
        The covariance matrix of the noise
    dtype : np.dtype | str | None, default: "float32"
        The dtype of the recording. Note that only np.float32 and np.float64 are supported.
    seed : int | None, default: None
        The seed for np.random.default_rng.
    strategy : "tile_pregenerated" | "on_the_fly", default: "tile_pregenerated"
        The strategy of generating noise chunk:
          * "tile_pregenerated": pregenerate a noise chunk of noise_block_size samples and repeat it;
                                 very fast and consumes only one noise block.
          * "on_the_fly": generate a new noise block on the fly by combining the seed and the noise block index;
                          no memory preallocation but a bit more computation (random generation)
    noise_block_size : int, default: 30000
        Size in sample of noise block.
    
    Notes
    -----
    If modifying this function, ensure that only one call to malloc is made per call to get_traces to
    maintain the optimized memory profile.
  __init__(self, num_channels: 'int', sampling_frequency: 'float', durations: 'list[float]', noise_levels: 'float | np.array' = 1.0, cov_matrix: 'np.array | None' = None, dtype: 'np.dtype | str | None' = 'float32', seed: 'int | None' = None, strategy: "Literal['tile_pregenerated', 'on_the_fly']" = 'tile_pregenerated', noise_block_size: 'int' = 30000)

Function: normal_pdf(x, mu: 'float' = 0.0, sigma: 'float' = 1.0)
  Docstring:
    Manual implementation of the Normal distribution pdf (probability density function).
    It is about 8 to 10 times faster than scipy.stats.norm.pdf().
    
    Parameters
    ----------
    x: scalar or array
        The x-axis
    mu: float, default: 0.0
        The mean of the Normal distribution.
    sigma: float, default: 1.0
        The standard deviation of the Normal distribution.
    
    Returns
    -------
    normal_pdf: scalar or array (same type as 'x')
        The pdf of the Normal distribution for the given x-axis.
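
A manual Normal pdf is a one-liner; avoiding scipy's generic distribution machinery is what makes such a direct evaluation faster than scipy.stats.norm.pdf(). A self-contained sketch matching the signature above:

```python
import math
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Direct evaluation of the Normal probability density function."""
    coef = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coef * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

x = np.linspace(-4, 4, 1001)
y = normal_pdf(x)      # standard Normal evaluated on a grid
```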

Function: order_channels_by_depth(recording, channel_ids=None, dimensions=('x', 'y'), flip=False)
  Docstring:
    Order channels by depth, by first ordering the x-axis, and then the y-axis.
    
    Parameters
    ----------
    recording : BaseRecording
        The input recording
    channel_ids : list/array or None
        If given, a subset of channels to order locations for
    dimensions : str, tuple, or list, default: ('x', 'y')
        If str, it needs to be 'x', 'y', 'z'.
        If tuple or list, it sorts the locations in two dimensions using lexsort.
        This approach is recommended since there is less ambiguity
    flip : bool, default: False
        If flip is False then the order is bottom first (starting from tip of the probe).
        If flip is True then the order is upper first.
    
    Returns
    -------
    order_f : np.array
        Array with sorted indices
    order_r : np.array
        Array with indices to revert sorting
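
The two-dimensional ordering can be sketched with np.lexsort; this is an illustration of the approach, not the library function (`order_by_depth` is a hypothetical name):

```python
import numpy as np

def order_by_depth(locations, flip=False):
    """lexsort sorts by the last key first, so passing (x, y) orders
    primarily by y (depth) and breaks ties with x. Also returns the
    inverse permutation that reverts the sorting."""
    order_f = np.lexsort((locations[:, 0], locations[:, 1]))
    if flip:
        order_f = order_f[::-1]                 # upper channels first
    order_r = np.argsort(order_f)               # inverse permutation
    return order_f, order_r

locs = np.array([[0.0, 40.0], [20.0, 0.0], [0.0, 0.0], [20.0, 40.0]])
order_f, order_r = order_by_depth(locs)
```

Applying `order_f` to an array of channels sorts them by depth, and indexing the sorted result with `order_r` restores the original order.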

Function: random_spikes_selection(sorting: 'BaseSorting', num_samples: 'int | None' = None, method: 'str' = 'uniform', max_spikes_per_unit: 'int' = 500, margin_size: 'int | None' = None, seed: 'int | None' = None)
  Docstring:
    This replaces `select_random_spikes_uniformly()`.
    Randomly selects spikes per unit across segments.
    Can optionally avoid spikes on segment borders if
    margin_size is not None.
    
    Parameters
    ----------
    sorting: BaseSorting
        The sorting object
    num_samples: list of int
        The number of samples per segment.
        Can be retrieved from recording with
        num_samples = [recording.get_num_samples(seg_index) for seg_index in range(recording.get_num_segments())]
    method: "uniform"  | "all", default: "uniform"
        The method to use. Only "uniform" is implemented for now
    max_spikes_per_unit: int, default: 500
        The maximum number of spikes per unit
    margin_size: None | int, default: None
        A margin on each border of segments to avoid border spikes
    seed: None | int, default: None
        A seed for random generator
    
    Returns
    -------
    random_spikes_indices: np.array
        Selected spike indices corresponding to the sorting spike vector.
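
The "uniform" method can be sketched as a per-unit subsample of the flat spike vector; `uniform_spike_selection` is an illustrative helper, not the library implementation:

```python
import numpy as np

def uniform_spike_selection(spike_unit_indices, max_spikes_per_unit=500, seed=None):
    """For each unit, uniformly pick at most max_spikes_per_unit positions in
    the flat spike vector. spike_unit_indices[i] is the unit of the i-th
    spike, in spike-vector order."""
    rng = np.random.default_rng(seed)
    selected = []
    for unit in np.unique(spike_unit_indices):
        (inds,) = np.nonzero(spike_unit_indices == unit)
        if inds.size > max_spikes_per_unit:
            # Subsample without replacement to cap this unit's contribution
            inds = rng.choice(inds, size=max_spikes_per_unit, replace=False)
        selected.append(inds)
    return np.sort(np.concatenate(selected))

units = np.repeat([0, 1], [1000, 50])   # unit 0 has 1000 spikes, unit 1 has 50
sel = uniform_spike_selection(units, max_spikes_per_unit=500, seed=1)
```

Units with fewer spikes than the cap keep all their spikes, while larger units are subsampled, which is the behavior wanted for waveform extraction.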

Class: read_binary
  Docstring:
    RecordingExtractor for a binary format
    
    Parameters
    ----------
    file_paths : str or Path or list
        Path to the binary file
    sampling_frequency : float
        The sampling frequency
    num_channels : int
        Number of channels
    dtype : str or dtype
        The dtype of the binary file
    time_axis : int, default: 0
        The axis of the time dimension
    t_starts : None or list of float, default: None
        Times in seconds of the first sample for each segment
    channel_ids : list, default: None
        A list of channel ids
    file_offset : int, default: 0
        Number of bytes in the file to offset by during memmap instantiation.
    gain_to_uV : float or array-like, default: None
        The gain to apply to the traces
    offset_to_uV : float or array-like, default: None
        The offset to apply to the traces
    is_filtered : bool or None, default: None
        If True, the recording is assumed to be filtered. If None, is_filtered is not set.
    
    Notes
    -----
    When both num_channels and num_chan are provided, `num_channels` is used and `num_chan` is ignored.
    
    Returns
    -------
    recording : BinaryRecordingExtractor
        The recording Extractor
  __init__(self, file_paths, sampling_frequency, dtype, num_channels: 'int', t_starts=None, channel_ids=None, time_axis=0, file_offset=0, gain_to_uV=None, offset_to_uV=None, is_filtered=None)
  Method: get_binary_description(self)
    Docstring:
      When `rec.is_binary_compatible()` is True
      this returns a dictionary describing the binary format.
  Method: is_binary_compatible(self) -> 'bool'
    Docstring:
      Checks if the recording is "binary" compatible.
      To be used before calling `rec.get_binary_description()`
      
      Returns
      -------
      bool
          True if the underlying recording is binary
  Method: write_recording(recording, file_paths, dtype=None, **job_kwargs)
    Docstring:
      Save the traces of a recording extractor in binary .dat format.
      
      Parameters
      ----------
      recording : RecordingExtractor
          The recording extractor object to be saved in .dat format
      file_paths : str
          The path to the file.
      dtype : dtype, default: None
          Type of the saved data
      **job_kwargs : keyword arguments for parallel processing:
          * chunk_duration or chunk_size or chunk_memory or total_memory
              - chunk_size : int
                  Number of samples per chunk
              - chunk_memory : str
                  Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
              - total_memory : str
                  Total memory usage (e.g. "500M", "2G")
              - chunk_duration : str or float or None
                  Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
          * n_jobs : int | float
              Number of jobs to use. With -1 the number of jobs is the same as number of cores.
              Using a float between 0 and 1 will use that fraction of the total cores.
          * progress_bar : bool
              If True, a progress bar is printed
          * mp_context : "fork" | "spawn" | None, default: None
              Context for multiprocessing. It can be None, "fork" or "spawn".
              Note that "fork" is only safely available on LINUX systems

Class: read_binary_folder
  Docstring:
    BinaryFolderRecording is an internal format used in spikeinterface.
    It is a BinaryRecordingExtractor + metadata contained in a folder.
    
    It is created with the function: `recording.save(format="binary", folder="/myfolder")`
    
    Parameters
    ----------
    folder_path : str or Path
    
    Returns
    -------
    recording : BinaryFolderRecording
        The recording
  __init__(self, folder_path)
  Method: get_binary_description(self)
    Docstring:
      When `rec.is_binary_compatible()` is True
      this returns a dictionary describing the binary format.
  Method: is_binary_compatible(self) -> 'bool'
    Docstring:
      Checks if the recording is "binary" compatible.
      To be used before calling `rec.get_binary_description()`
      
      Returns
      -------
      bool
          True if the underlying recording is binary

Class: read_npy_snippets
  Docstring:
    Dead simple and super light format based on the NPY numpy format.
    
    It is in fact an archive of several files in the .npy format.
    All spikes are stored in a two-column manner: index + labels.
  __init__(self, file_paths, sampling_frequency, channel_ids=None, nbefore=None, gain_to_uV=None, offset_to_uV=None)
  Method: write_snippets(snippets, file_paths, dtype=None)
    Docstring:
      Save snippet extractor in binary .npy format.
      
      Parameters
      ----------
      snippets: SnippetsExtractor
          The snippets extractor object to be saved in .npy format
      file_paths: str or list[str]
          The path(s) to the files.
      dtype: None, str or dtype
          Typecode or data-type to which the snippets will be cast.

Class: read_npy_snippets_folder
  Docstring:
    NpyFolderSnippets is an internal format used in spikeinterface.
    It is a NpySnippetsExtractor + metadata contained in a folder.
    
    It is created with the function: `snippets.save(format="npy", folder="/myfolder")`
    
    Parameters
    ----------
    folder_path : str or Path
        The path to the folder
    
    Returns
    -------
    snippets : NpyFolderSnippets
        The snippets
  __init__(self, folder_path)

Class: read_npz_folder
  Docstring:
    NpzFolderSorting is the old internal format used in spikeinterface (<=0.98.0)
    
    This is a folder that contains:
    
      * "sorting_cached.npz" file in the NpzSortingExtractor format
      * "npz.json", the JSON description of the NpzSortingExtractor
      * a metadata folder for units properties.
    
    It is created with the function: `sorting.save(folder="/myfolder", format="npz_folder")`
    
    Parameters
    ----------
    folder_path : str or Path
    
    Returns
    -------
    sorting : NpzFolderSorting
        The sorting
  __init__(self, folder_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Class: read_npz_sorting
  Docstring:
    Dead simple and super light format based on the NPZ numpy format.
    https://docs.scipy.org/doc/numpy/reference/generated/numpy.savez.html#numpy.savez
    
    It is in fact an archive of several files in .npy format.
    All spikes are stored in a two-column manner: index + labels
  __init__(self, file_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Class: read_numpy_sorting_folder
  Docstring:
    NumpyFolderSorting is the new internal format used in spikeinterface (>=0.99.0) for caching sorting objects.
    
    It is a simple folder that contains:
      * a file "spike.npy" (numpy format) with all flattened spikes (using sorting.to_spike_vector())
      * a "numpysorting_info.json" containing sampling_frequency, unit_ids and num_segments
      * a metadata folder for units properties.
    
    It is created with the function: `sorting.save(folder="/myfolder", format="numpy_folder")`
  __init__(self, folder_path)
  Method: write_sorting(sorting, save_path)
    Docstring:
      None

Function: read_python(path)
  Docstring:
    Parses a python script into a dictionary
    
    Parameters
    ----------
    path: str or Path
        Path to file to parse
    
    Returns
    -------
    metadata : dict
        Dictionary containing the parsed file's variables
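A helper with this contract is commonly implemented by executing the script and collecting its top-level variables. A minimal sketch under that assumption (`read_python_sketch` is a hypothetical name, not the spikeinterface implementation):

```python
# Hypothetical sketch of a "parse a python script into a dict" helper.
from pathlib import Path


def read_python_sketch(path):
    """Execute a python file and return its top-level variables as a dict."""
    source = Path(path).read_text()
    metadata = {}
    # assignments made by the script land in the locals dict `metadata`
    exec(compile(source, str(path), "exec"), {}, metadata)
    return {k: v for k, v in metadata.items() if not k.startswith("__")}
```

Typical input would be a params file such as Kilosort's `params.py` (e.g. a line like `n_channels_dat = 384`).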

Function: read_zarr(folder_path: 'str | Path', storage_options: 'dict | None' = None) -> 'ZarrRecordingExtractor | ZarrSortingExtractor'
  Docstring:
    Read recording or sorting from a zarr format
    
    Parameters
    ----------
    folder_path : str or Path
        Path to the zarr root file
    storage_options : dict or None
        Storage options for zarr `store`. E.g., if "s3://" or "gcs://" they can provide authentication methods, etc.
    
    Returns
    -------
    extractor : ZarrExtractor
        The loaded extractor

Function: reset_global_job_kwargs()
  Docstring:
    Reset the global job kwargs.

Function: reset_global_tmp_folder()
  Docstring:
    Generate a new global temporary folder path.

Class: select_segment_recording
  Docstring:
    Return a new recording with a subset of segments from a multi-segment recording.
    
    Parameters
    ----------
    recording : BaseRecording
        The multi-segment recording
    segment_indices : int | list[int]
        The segment indices to select
  __init__(self, recording: 'BaseRecording', segment_indices: 'int | list[int]')

Class: select_segment_sorting
  Docstring:
    Return a new sorting with a subset of segments from a multi-segment sorting.
    
    Parameters
    ----------
    sorting : BaseSorting
        The multi-segment sorting
    segment_indices : int | list[int]
        The segment indices to select
  __init__(self, sorting: 'BaseSorting', segment_indices: 'int | list[int]')

Function: set_global_dataset_folder(folder)
  Docstring:
    Set the global dataset folder.

Function: set_global_job_kwargs(**job_kwargs)
  Docstring:
    Set the global job kwargs.
    
    Parameters
    ----------
    
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems

Function: set_global_tmp_folder(folder)
  Docstring:
    Set the global temporary folder path.

Function: snippets_from_sorting(recording, sorting, nbefore=20, nafter=44, wf_folder=None, **job_kwargs)
  Docstring:
    Extract snippets from recording and sorting instances
    
    Parameters
    ----------
    recording: BaseRecording
        The recording to get snippets from
    sorting: BaseSorting
        The sorting to get the frames from
    nbefore: int
        N samples before spike
    nafter: int
        N samples after spike
    wf_folder: None, str or path
        Folder to save npy files. If None, shared memory is used to extract waveforms

    Returns
    -------
    snippets: NumpySnippets
        Snippets extractor created

Function: spike_vector_to_spike_trains(spike_vector: 'list[np.array]', unit_ids: 'np.array') -> 'dict[dict[str, np.array]]'
  Docstring:
    Computes all spike trains for all units/segments from a spike vector list.
    
    Internally calls numba if numba is installed.
    
    Parameters
    ----------
    spike_vector: list[np.ndarray]
        List of spike vectors obtained with sorting.to_spike_vector(concatenated=False)
    unit_ids: np.array
        Unit ids
    
    Returns
    -------
    spike_trains: dict[dict]:
        A dict containing, for each segment, the spike trains of all units
        (as a dict: unit_id --> spike_train).
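Without the numba fast path, the conversion amounts to masking the spike vector once per unit. A pure-numpy sketch for illustration (the helper name is hypothetical; field names follow `sorting.to_spike_vector()`):

```python
import numpy as np


def spike_trains_from_vector(spike_vectors, unit_ids):
    """Pure-numpy sketch of spike_vector_to_spike_trains (no numba path).

    spike_vectors: list (one per segment) of structured arrays with fields
    "sample_index" and "unit_index", as returned by
    sorting.to_spike_vector(concatenated=False).
    """
    spike_trains = {}
    for segment_index, sv in enumerate(spike_vectors):
        per_unit = {}
        for unit_index, unit_id in enumerate(unit_ids):
            # unit_index is positional; unit_id is the user-facing label
            mask = sv["unit_index"] == unit_index
            per_unit[unit_id] = sv["sample_index"][mask]
        spike_trains[segment_index] = per_unit
    return spike_trains
```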

Function: split_job_kwargs(mixed_kwargs)
  Docstring:
    This function splits mixed kwargs into job_kwargs and specific_kwargs.
    This can be useful for some function with generic signature
    mixing specific and job kwargs.
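The split can be pictured as filtering a mixed kwargs dict against the known job-kwarg names. A minimal sketch (the key list and the `specific, job` return order are assumptions for illustration, not the actual implementation):

```python
# Job-related keys, taken from the job_kwargs documented elsewhere in this
# module; illustrative only.
JOB_KEYS = ("n_jobs", "chunk_size", "chunk_memory", "total_memory",
            "chunk_duration", "progress_bar", "mp_context")


def split_job_kwargs_sketch(mixed_kwargs):
    """Split mixed kwargs into (specific_kwargs, job_kwargs)."""
    job_kwargs = {k: v for k, v in mixed_kwargs.items() if k in JOB_KEYS}
    specific_kwargs = {k: v for k, v in mixed_kwargs.items() if k not in JOB_KEYS}
    return specific_kwargs, job_kwargs
```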

Function: split_recording(recording: 'BaseRecording')
  Docstring:
    Return a list of mono-segment recordings from a multi-segment recording.
    
    Parameters
    ----------
    recording : BaseRecording
        The multi-segment recording
    
    Returns
    -------
    recording_list
        A list of mono-segment recordings

Class: split_sorting
  Docstring:
    Splits a sorting with a single segment into multiple segments
    based on the given list of recordings (must be in order)
    
    Parameters
    ----------
    parent_sorting : BaseSorting
        Sorting with a single segment (e.g. from sorting concatenated recording)
    recording_or_recording_list : list of recordings, ConcatenateSegmentRecording, or None, default: None
        If list of recordings, uses the lengths of those recordings to split the sorting
        into smaller segments
        If ConcatenateSegmentRecording, uses the associated list of recordings to split
        the sorting into smaller segments
        If None, looks for the recording associated with the sorting
  __init__(self, parent_sorting: 'BaseSorting', recording_or_recording_list=None)

Function: synthesize_random_firings(num_units=20, sampling_frequency=30000.0, duration=60, refractory_period_ms=4.0, firing_rates=3.0, add_shift_shuffle=False, seed=None)
  Docstring:
    Generate spike trains with random firing for one segment.
    
    Parameters
    ----------
    num_units : int, default: 20
        Number of units.
    sampling_frequency : float, default: 30000.0
        Sampling rate in Hz.
    duration : float, default: 60
        Duration of the segment in seconds.
    refractory_period_ms : float
        Refractory period in ms.
    firing_rates : float or list[float]
        The firing rate of each unit (in Hz).
        If float, all units will have the same firing rate.
    add_shift_shuffle : bool, default: False
        Optionally add a small shuffle on half of the spikes to make the autocorrelogram less flat.
    seed : int, default: None
        Seed for the generator.
    
    Returns
    -------
    times: np.array
        Concatenated and sorted times vector.
    labels: np.array
        Concatenated and sorted label vector.
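The generation scheme can be sketched as drawing exponential inter-spike intervals shifted by the refractory period, then concatenating and time-sorting all units. This is an illustrative generator under those assumptions, not the actual implementation (which also supports per-unit rates and shift shuffling):

```python
import numpy as np


def random_firings_sketch(num_units=3, sampling_frequency=30000.0, duration=10.0,
                          firing_rate=3.0, refractory_period_ms=4.0, seed=0):
    """Illustrative spike generator with a hard refractory period.

    Returns concatenated, time-sorted (times, labels) in samples, like
    synthesize_random_firings; details differ from the real function.
    """
    rng = np.random.default_rng(seed)
    refractory_s = refractory_period_ms / 1000.0
    all_times, all_labels = [], []
    for unit_index in range(num_units):
        # exponential ISIs shifted by the refractory period
        n_expected = int(duration * firing_rate * 2) + 10
        isis = refractory_s + rng.exponential(1.0 / firing_rate, n_expected)
        times = np.cumsum(isis)
        times = times[times < duration]
        all_times.append(np.round(times * sampling_frequency).astype("int64"))
        all_labels.append(np.full(times.size, unit_index, dtype="int64"))
    times = np.concatenate(all_times)
    labels = np.concatenate(all_labels)
    order = np.argsort(times)
    return times[order], labels[order]
```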

Function: synthetize_spike_train_bad_isi(duration, baseline_rate, num_violations, violation_delta=1e-05)
  Docstring:
    Create a spike train with uniform inter-spike intervals, except where ISI violations occur.
    
    Parameters
    ----------
    duration : float
        Length of simulated recording (in seconds).
    baseline_rate : float
        Firing rate for "true" spikes.
    num_violations : int
        Number of contaminating spikes.
    violation_delta : float, default: 1e-5
        Temporal offset of contaminating spikes (in seconds).
    
    Returns
    -------
    spike_train : np.array
        Array of monotonically increasing spike times.
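The construction described above can be sketched as a uniform grid of "true" spikes plus a few contaminating spikes offset by `violation_delta`. Illustrative only (the host-spike selection is an assumption of this sketch):

```python
import numpy as np


def spike_train_bad_isi_sketch(duration, baseline_rate, num_violations,
                               violation_delta=1e-5):
    """Illustrative version of synthetize_spike_train_bad_isi.

    "True" spikes are laid out with uniform ISIs; contaminating spikes are
    copies of randomly chosen spikes offset by violation_delta.
    """
    isi = 1.0 / baseline_rate
    true_spikes = np.arange(0.0, duration, isi)
    rng = np.random.default_rng(0)
    hosts = rng.choice(true_spikes[:-1], size=num_violations, replace=False)
    violations = hosts + violation_delta
    return np.sort(np.concatenate([true_spikes, violations]))
```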

Function: write_binary_recording(recording: "'BaseRecording'", file_paths: 'list[Path | str] | Path | str', dtype: 'np.typing.DTypeLike' = None, add_file_extension: 'bool' = True, byte_offset: 'int' = 0, auto_cast_uint: 'bool' = True, verbose: 'bool' = False, **job_kwargs)
  Docstring:
    Save the traces of a recording extractor in one or several binary .dat files.
    
    Note :
        time_axis is always 0 (contrary to previous versions).
        To get time_axis=1 (which is a bad idea) use `write_binary_recording_file_handle()`
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor object to be saved in .dat format
    file_paths : str or list[str]
        The path(s) to the file(s).
    dtype : dtype or None, default: None
        Type of the saved data
    add_file_extension : bool, default: True
        If True, and the file path does not end in "raw", "bin", or "dat" then "raw" is added as an extension.
    byte_offset : int, default: 0
        Offset in bytes for the binary file (e.g. to write a header). This is useful in case you want to append data
        to an existing file where you wrote a header or other data before.
    auto_cast_uint : bool, default: True
        If True, unsigned integers are automatically cast to int if the specified dtype is signed
        .. deprecated:: 0.103, use the `unsigned_to_signed` function instead.
    verbose : bool
        This is the verbosity of the ChunkRecordingExecutor
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on LINUX systems

Function: write_python(path, dict)
  Docstring:
    Saves python dictionary to file
    
    Parameters
    ----------
    path: str or Path
        Path to save file
    dict: dict
        dictionary to save

Function: write_to_h5_dataset_format(recording, dataset_path, segment_index, save_path=None, file_handle=None, time_axis=0, single_axis=False, dtype=None, chunk_size=None, chunk_memory='500M', verbose=False, auto_cast_uint=True, return_scaled=False)
  Docstring:
    Save the traces of a recording extractor in an h5 dataset.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor object to be saved in the h5 dataset
    dataset_path : str
        Path to dataset in the h5 file (e.g. "/dataset")
    segment_index : int
        index of segment
    save_path : str, default: None
        The path to the file.
    file_handle : file handle, default: None
        The file handle to dump data. This can be used to append data to a header. In case file_handle is given,
        the file is NOT closed after writing the binary data.
    time_axis : 0 or 1, default: 0
        If 0 then traces are transposed to ensure (nb_sample, nb_channel) in the file.
        If 1, the traces shape (nb_channel, nb_sample) is kept in the file.
    single_axis : bool, default: False
        If True, a single-channel recording is saved as a one dimensional array
    dtype : dtype, default: None
        Type of the saved data
    chunk_size : None or int, default: None
        Size of each chunk in number of samples. This avoids too much memory consumption for big files.
        If None and "chunk_memory" is given, the file is saved in chunks of "chunk_memory" MB
    chunk_memory : None or str, default: "500M"
        Chunk size as a memory string; must end with "k", "M" or "G"
    verbose : bool, default: False
        If True, output is verbose (when chunks are used)
    auto_cast_uint : bool, default: True
        If True, unsigned integers are automatically cast to int if the specified dtype is signed
    return_scaled : bool, default: False
        If True and the recording has scaling (gain_to_uV and offset_to_uV properties),
        traces are dumped to uV

==== DELIM ====
API for module: spikeinterface.postprocessing

Class: AlignSortingExtractor
  Docstring:
    Class to shift each unit's spike train (generally to align the template
    on the peak), given the shifts for each unit.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting to align.
    unit_peak_shifts : dict
        Dictionary mapping the unit_id to the unit's shift (in number of samples).
        A positive shift means the spike train is shifted back in time, while
        a negative shift means the spike train is shifted forward.
    
    Returns
    -------
    aligned_sorting : AlignSortingExtractor
        The aligned sorting.
  __init__(self, sorting, unit_peak_shifts)
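Following one reading of the sign convention documented above (a positive shift moves the spike train back in time), the operation can be sketched as subtracting the per-unit shift from the spike frames. This is an illustrative helper on plain arrays, not the extractor itself, and the actual sign handling may differ:

```python
import numpy as np


def apply_unit_shifts(spike_trains, unit_peak_shifts):
    """Sketch of AlignSortingExtractor's shift on plain spike trains.

    spike_trains: dict unit_id -> np.array of sample indices.
    unit_peak_shifts: dict unit_id -> shift in samples (positive = earlier,
    under the assumed sign convention).
    """
    return {
        unit_id: train - unit_peak_shifts.get(unit_id, 0)
        for unit_id, train in spike_trains.items()
    }
```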

Class: ComputeAmplitudeScalings
  Docstring:
    Computes the amplitude scalings from a SortingAnalyzer.
    
    Amplitude scalings are the scaling factor to multiply the
    unit template to best match the waveform. Each waveform
    has an associated amplitude scaling.
    
    In the case where there are no spike collisions, the scaling is
    the regression of the waveform onto the template, with intercept:
        scaling * template + intercept = waveform
    
    When there are spike collisions, a different approach is taken.
    Spike collisions are sets of temporally and spatially overlapping spikes.
    Therefore, signal from other spikes can contribute to the amplitude
    of the spike of interest. To address this, a multivariate linear
    regression is used to regress the waveform (that contains multiple spikes,
    the spike of interest and colliding spikes) onto a set of templates.
    
    Parameters
    ----------
    sorting_analyzer: SortingAnalyzer
        A SortingAnalyzer object
    sparsity: ChannelSparsity or None, default: None
        If waveforms are not sparse, sparsity is required if the number of channels is greater than
        `max_dense_channels`. If the SortingAnalyzer is sparse, its sparsity is automatically used.
    max_dense_channels: int, default: 16
        Maximum number of channels to allow running without sparsity. To compute amplitude scaling using
        dense waveforms, set this to None, sparsity to None, and pass dense waveforms as input.
    ms_before : float or None, default: None
        The cut out to apply before the spike peak to extract local waveforms.
        If None, the SortingAnalyzer ms_before is used.
    ms_after : float or None, default: None
        The cut out to apply after the spike peak to extract local waveforms.
        If None, the SortingAnalyzer ms_after is used.
    handle_collisions: bool, default: True
        Whether to handle collisions between spikes. If True, the amplitude scaling of colliding spikes
        (defined as spikes within `delta_collision_ms` ms and with overlapping sparsity) is computed by fitting a
        multi-linear regression model (with `sklearn.LinearRegression`). If False, each spike is fitted independently.
    delta_collision_ms: float, default: 2
        The maximum time difference in ms before and after a spike to gather colliding spikes.
    load_if_exists : bool, default: False
        Whether to load precomputed spike amplitudes, if they already exist.
    outputs: "concatenated" | "by_unit", default: "concatenated"
        How the output should be returned
    
    Returns
    -------
    amplitude_scalings: np.array or list of dict
        The amplitude scalings.
            - If "concatenated" all amplitudes for all spikes and all units are concatenated
            - If "by_unit", amplitudes are returned as a list (for segments) of dictionaries (for units)
  __init__(self, sorting_analyzer)
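For the collision-free case described above (scaling * template + intercept = waveform), the scaling is an ordinary least-squares fit of the flattened waveform onto the flattened template. A numpy sketch with a hypothetical helper name, not the spikeinterface implementation:

```python
import numpy as np


def fit_amplitude_scaling(waveform, template):
    """Collision-free amplitude scaling: regress the (flattened) waveform
    onto the template with an intercept, as described above."""
    x = template.ravel()
    y = waveform.ravel()
    # design matrix [template, 1] -> least squares for [scaling, intercept]
    A = np.column_stack([x, np.ones_like(x)])
    (scaling, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
    return scaling, intercept
```

The collision case generalizes this to a multivariate regression with one column per overlapping template.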

Class: ComputeCorrelograms
  Docstring:
    Compute auto and cross correlograms of unit spike times.
    
    Parameters
    ----------
    sorting_analyzer_or_sorting : SortingAnalyzer | Sorting
        A SortingAnalyzer or Sorting object
    window_ms : float, default: 50.0
        The window around the spike to compute the correlation in ms. For example,
        if 50 ms, the correlations will be computed at lags -25 ms ... 25 ms.
    bin_ms : float, default: 1.0
        The bin size in ms. This determines the bin size over which to
        combine lags. For example, with a window size of -25 ms to 25 ms, and
        bin size 1 ms, the correlation will be binned as -25 ms, -24 ms, ...
    method : "auto" | "numpy" | "numba", default: "auto"
         If "auto" and numba is installed, numba is used, otherwise numpy is used.
    
    Returns
    -------
    correlogram : np.array
        Correlograms with shape (num_units, num_units, num_bins)
        The diagonal of the correlogram (e.g. correlogram[A, A, :])
        holds the unit auto correlograms. The off-diagonal elements
        are the cross-correlograms between units, where correlogram[A, B, :]
        and correlogram[B, A, :] represent cross-correlation between
        the same pair of units, applied in opposite directions,
        correlogram[A, B, :] = correlogram[B, A, ::-1].
    bins :  np.array
        The bin edges in ms
    
    Notes
    -----
    In the extracellular electrophysiology context, a correlogram
    is a visualisation of the results of a cross-correlation
    between two spike trains. The cross-correlation slides one spike train
    along another sample-by-sample, taking the correlation at each 'lag'. This results
    in a plot with 'lag' (i.e. time offset) on the x-axis and 'correlation'
    (i.e. how similar the two spike trains are) on the y-axis. In this
    implementation, the y-axis result is the 'counts' of spike matches per
    time bin (rather than a computed correlation or covariance).
    
    In the present implementation, a 'window' around spikes is first
    specified. For example, if a window of 100 ms is taken, we will
    take the correlation at lags from -50 ms to +50 ms around the spike peak.
    In theory, we can have as many lags as we have samples. Often, this
    visualisation is too high resolution and instead the lags are binned
    (e.g. -50 to -45 ms, ..., -5 to 0 ms, 0 to 5 ms, ...., 45 to 50 ms).
    When using counts as output, binning the lags involves adding up all counts across
    a range of lags.
  __init__(self, sorting_analyzer)
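The Notes above can be sketched as a brute-force binned cross-correlogram: all pairwise lags between two spike trains are histogrammed over the window. This illustrative helper (hypothetical, not the numpy/numba implementations used internally) also exhibits the documented symmetry correlogram[A, B, :] == correlogram[B, A, ::-1]:

```python
import numpy as np


def correlogram_counts(st1, st2, window, bin_size):
    """Counts of lags (st2 - st1) per bin; times, window and bin_size
    share the same unit (e.g. samples or ms)."""
    # all pairwise lags (fine for small trains; real code avoids O(n*m))
    lags = (st2[None, :] - st1[:, None]).ravel()
    edges = np.arange(-window, window + bin_size, bin_size)
    counts, _ = np.histogram(lags, bins=edges)
    return counts, edges
```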

Class: ComputeISIHistograms
  Docstring:
    Compute ISI histograms.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object
    window_ms : float, default: 50
        The window in ms
    bin_ms : float, default: 1
        The bin size in ms
    method : "auto" | "numpy" | "numba", default: "auto"
        If "auto" and numba is installed, numba is used, otherwise numpy is used
    
    Returns
    -------
    isi_histograms : np.array
        ISI histograms with shape (num_units, num_bins)
    bins :  np.array
        The bin edges in ms
  __init__(self, sorting_analyzer)
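For a single unit, the computation described above reduces to histogramming consecutive inter-spike intervals over the window. An illustrative numpy sketch (helper name hypothetical):

```python
import numpy as np


def isi_histogram(spike_train, window_ms=50.0, bin_ms=1.0,
                  sampling_frequency=30000.0):
    """Single-unit ISI histogram: consecutive inter-spike intervals
    (converted from samples to ms) binned up to window_ms."""
    isis_ms = np.diff(spike_train) / sampling_frequency * 1000.0
    edges = np.arange(0.0, window_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(isis_ms, bins=edges)
    return counts, edges
```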

Class: ComputePrincipalComponents
  Docstring:
    Compute PC scores from waveform extractor. The PCA projections are pre-computed only
    on the sampled waveforms available from the "waveforms" extension.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object
    n_components : int, default: 5
        Number of components for PCA
    mode : "by_channel_local" | "by_channel_global" | "concatenated", default: "by_channel_local"
        The PCA mode:
            - "by_channel_local": a local PCA is fitted for each channel (projection by channel)
            - "by_channel_global": a global PCA is fitted for all channels (projection by channel)
            - "concatenated": channels are concatenated and a global PCA is fitted
    sparsity : ChannelSparsity or None, default: None
        The sparsity to apply to waveforms.
        If sorting_analyzer is already sparse, the default sparsity will be used
    whiten : bool, default: True
        If True, waveforms are pre-whitened
    dtype : dtype, default: "float32"
        Dtype of the pc scores
    
    Examples
    --------
    >>> sorting_analyzer = create_sorting_analyzer(sorting, recording)
    >>> sorting_analyzer.compute("principal_components", n_components=3, mode='by_channel_local')
    >>> ext_pca = sorting_analyzer.get_extension("principal_components")
    >>> # get pre-computed projections for unit_id=1
    >>> unit_projections = ext_pca.get_projections_one_unit(unit_id=1, sparse=False)
    >>> # get pre-computed projections for some units on some channels
    >>> some_projections, spike_unit_indices = ext_pca.get_some_projections(channel_ids=None, unit_ids=None)
    >>> # retrieve fitted pca model(s)
    >>> pca_model = ext_pca.get_pca_model()
    >>> # compute projections on new waveforms
    >>> proj_new = ext_pca.project_new(new_waveforms)
    >>> # run for all spikes in the SortingExtractor
    >>> pc.run_for_all_spikes(file_path="all_pca_projections.npy")
  __init__(self, sorting_analyzer)
  Method: get_pca_model(self)
    Docstring:
      Returns the scikit-learn PCA model objects.
      
      Returns
      -------
      pca_models: PCA object(s)
          * if mode is "by_channel_local", "pca_model" is a list of PCA model by channel
          * if mode is "by_channel_global" or "concatenated", "pca_model" is a single PCA model
  Method: get_projections_one_unit(self, unit_id, sparse=False)
    Docstring:
      Returns the computed projections for the sampled waveforms of a unit id.
      
      Parameters
      ----------
      unit_id : int or str
          The unit id to return PCA projections for
      sparse: bool, default: False
          If True (the SortingAnalyzer must be sparse), only projections on sparse channels are returned.
          Channel indices are also returned.
      
      Returns
      -------
      projections: np.array
          The PCA projections (num_waveforms, num_components, num_channels).
          In case sparsity is used, only the projections on sparse channels are returned.
      channel_indices: np.array
  Method: get_some_projections(self, channel_ids=None, unit_ids=None)
    Docstring:
      Returns the computed projections for the sampled waveforms of some units and some channels.
      
      When internally sparse, this function realigns projections on the given channel_ids set.
      
      Parameters
      ----------
      channel_ids : list, default: None
          List of channel ids on which projections must be aligned
      unit_ids : list, default: None
          List of unit ids to return projections for
      
      Returns
      -------
      some_projections: np.array
          The PCA projections (num_spikes, num_components, num_sparse_channels)
      spike_unit_indices: np.array
          A copy of some_spikes["unit_index"] for the returned PCA projections, of shape (num_spikes,)
  Method: project_new(self, new_spikes, new_waveforms, progress_bar=True)
    Docstring:
      Projects new waveforms or traces snippets on the PC components.
      
      Parameters
      ----------
      new_spikes: np.array
          The spike vector associated with the waveforms buffer. This is needed to get the sparsity spike per spike.
      new_waveforms: np.array
          Array with new waveforms to project with shape (num_waveforms, num_samples, num_channels)
      
      Returns
      -------
      new_projections: np.array
          Projections of new waveforms on PCA components
  Method: run_for_all_spikes(self, file_path=None, verbose=False, **job_kwargs)
    Docstring:
      Project all spikes from the sorting on the PCA model.
      This is a long computation because waveforms need to be extracted for each spike.
      
      Used mainly for `export_to_phy()`
      
      PCs are exported to a .npy single file.
      
      Parameters
      ----------
      file_path : str or Path or None
          Path to npy file that will store the PCA projections.

Class: ComputeSpikeAmplitudes
  Docstring:
    Computes the spike amplitudes.
    
    Needs "templates" to be computed first.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object
    ms_before : float, default: 0.5
        The left window, before a peak, in milliseconds
    ms_after : float, default: 0.5
        The right window, after a peak, in milliseconds
    spike_retriver_kwargs : dict
        A dictionary to control the behavior for getting the maximum channel for each spike
        This dictionary contains:
          * channel_from_template: bool, default: True
              Whether the maximum channel for each spike is taken from the template or re-estimated for every spike
              channel_from_template = True is the old behavior but less accurate
              channel_from_template = False is slower but more accurate
          * radius_um: float, default: 50
              In case channel_from_template=False, this is the radius to get the true peak
          * peak_sign, default: "neg"
              In case channel_from_template=False, this is the peak sign.
    method : "center_of_mass" | "monopolar_triangulation" | "grid_convolution", default: "center_of_mass"
        The localization method to use
    **method_kwargs : dict, default: {}
        Kwargs which are passed to the method function. These can be found in the docstrings of `compute_center_of_mass`, `compute_grid_convolution` and `compute_monopolar_triangulation`.
    outputs : "numpy" | "by_unit", default: "numpy"
        The output format, either concatenated as numpy array or separated on a per unit basis
    
    Returns
    -------
    spike_amplitudes: np.array
        All amplitudes for all spikes and all units are concatenated
  __init__(self, sorting_analyzer)

Class: ComputeSpikeLocations
  Docstring:
    Localize spikes in 2D or 3D with several methods given the template.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object
    ms_before : float, default: 0.5
        The left window, before a peak, in milliseconds
    ms_after : float, default: 0.5
        The right window, after a peak, in milliseconds
    spike_retriver_kwargs : dict
        A dictionary to control the behavior for getting the maximum channel for each spike
        This dictionary contains:
    
          * channel_from_template: bool, default: True
              Whether the maximum channel for each spike is taken from the template or re-estimated for every spike
              channel_from_template = True is the old behavior but less accurate
              channel_from_template = False is slower but more accurate
          * radius_um: float, default: 50
              In case channel_from_template=False, this is the radius to get the true peak
          * peak_sign, default: "neg"
              In case channel_from_template=False, this is the peak sign.
    method : "center_of_mass" | "monopolar_triangulation" | "grid_convolution", default: "center_of_mass"
        The localization method to use
    method_kwargs : dict, default: dict()
        Other kwargs depending on the method.
    outputs : "concatenated" | "by_unit", default: "concatenated"
        The output format
    
    Returns
    -------
    spike_locations: np.array
        All locations for all spikes
  __init__(self, sorting_analyzer)

Class: ComputeTemplateMetrics
  Docstring:
    Compute template metrics including:
        * peak_to_valley
        * peak_trough_ratio
        * halfwidth
        * repolarization_slope
        * recovery_slope
        * num_positive_peaks
        * num_negative_peaks
    
    Optionally, the following multi-channel metrics can be computed (when include_multi_channel_metrics=True):
        * velocity_above
        * velocity_below
        * exp_decay
        * spread
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        The SortingAnalyzer object
    metric_names : list or None, default: None
        List of metrics to compute (see si.postprocessing.get_template_metric_names())
    peak_sign : {"neg", "pos"}, default: "neg"
        Whether to use the positive ("pos") or negative ("neg") peaks to estimate extremum channels.
    upsampling_factor : int, default: 10
        The upsampling factor to upsample the templates
    sparsity : ChannelSparsity or None, default: None
        If None, template metrics are computed on the extremum channel only.
        If sparsity is given, template metrics are computed on all sparse channels of each unit.
        For more on generating a ChannelSparsity, see the `~spikeinterface.compute_sparsity()` function.
    include_multi_channel_metrics : bool, default: False
        Whether to compute multi-channel metrics
    delete_existing_metrics : bool, default: False
        If True, any template metrics attached to the `sorting_analyzer` are deleted. If False, any metrics which were previously calculated but are not included in `metric_names` are kept, provided the `metric_params` are unchanged.
    metric_params : dict of dicts or None, default: None
        Dictionary with parameters for template metrics calculation.
        Default parameters can be obtained with: `si.postprocessing.template_metrics.get_default_tm_params()`
    
    Returns
    -------
    template_metrics : pd.DataFrame
        Dataframe with the computed template metrics.
        If "sparsity" is None, the index is the unit_id.
        If "sparsity" is given, the index is a multi-index (unit_id, channel_id)
    
    Notes
    -----
    If any multi-channel metric is in the metric_names or include_multi_channel_metrics is True, sparsity must be None,
    so that one metric value will be computed per unit.
    For multi-channel metrics, 3D channel locations are not supported. By default, the depth direction is "y".
  __init__(self, sorting_analyzer)

Class: ComputeTemplateSimilarity
  Docstring:
    Compute similarity between templates with several methods.
    
    Similarity is defined as 1 - distance(T_1, T_2) for two templates T_1, T_2
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        The SortingAnalyzer object
    method : "cosine" | "l1" | "l2", default: "cosine"
        The method to compute the similarity.
        In case of "l1" or "l2", the formula used is:
        - similarity = 1 - norm(T_1 - T_2)/(norm(T_1) + norm(T_2)).
        In case of cosine it is:
        - similarity = 1 - sum(T_1.T_2)/(norm(T_1)norm(T_2)).
    max_lag_ms : float, default: 0
        If specified, the best distance over all lags within max_lag_ms is kept, for every template
    support : "dense" | "union" | "intersection", default: "union"
        Support that should be considered to compute the distances between the templates, given their sparsities.
        Can be either ["dense", "union", "intersection"]
    
    Returns
    -------
    similarity: np.array
        The similarity matrix
  __init__(self, sorting_analyzer)
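
The "l1"/"l2" formula above can be sketched in plain numpy. This is a minimal illustration with made-up templates, not the spikeinterface implementation:

```python
import numpy as np

def l2_template_similarity(t1, t2):
    # similarity = 1 - norm(T_1 - T_2) / (norm(T_1) + norm(T_2))
    d = np.linalg.norm(t1 - t2) / (np.linalg.norm(t1) + np.linalg.norm(t2))
    return 1.0 - d

rng = np.random.default_rng(0)
t1 = rng.standard_normal((90, 4))       # (num_samples, num_channels) template
print(l2_template_similarity(t1, t1))   # identical templates -> 1.0
print(l2_template_similarity(t1, -t1))  # opposite templates -> 0.0
```

The normalization by the sum of the norms keeps the similarity in [0, 1] for any pair of templates.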

Class: ComputeUnitLocations
  Docstring:
    Localize units in 2D or 3D with several methods given the template.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object
    method : "monopolar_triangulation" |  "center_of_mass" | "grid_convolution", default: "monopolar_triangulation"
        The method to use for localization
    **method_kwargs : dict, default: {}
        Kwargs which are passed to the method function. These can be found in the docstrings of `compute_center_of_mass`, `compute_grid_convolution` and `compute_monopolar_triangulation`.
    
    Returns
    -------
    unit_locations : np.array
        unit locations with shape (num_units, 2), (num_units, 3), or (num_units, 4) when alpha is also returned
  __init__(self, sorting_analyzer)
  Method: get_data(self, outputs='numpy')
    Docstring:
      None
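
For the "center_of_mass" method, the underlying computation is an amplitude-weighted average of the channel positions. A numpy sketch with a hypothetical 4-channel geometry (not the spikeinterface implementation):

```python
import numpy as np

# Hypothetical geometry: 4 channels along a vertical line, (x, y) in um.
channel_locations = np.array([[0.0, 0.0], [0.0, 20.0], [0.0, 40.0], [0.0, 60.0]])
# Peak-to-peak template amplitude of one unit on each channel.
ptp = np.array([1.0, 3.0, 3.0, 1.0])

# Center of mass: amplitude-weighted average of the channel positions.
com = (ptp[:, None] * channel_locations).sum(axis=0) / ptp.sum()
print(com[1])  # 30.0 -- the symmetric midpoint in depth
```

Symmetric amplitudes land the estimate exactly halfway along the probe span, which is why the method is a quick but biased-toward-the-array baseline compared to monopolar triangulation.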

Class: align_sorting
  Docstring:
    Class to shift a unit (generally to align the template on the peak) given
    the shifts for each unit.
    
    Parameters
    ----------
    sorting : BaseSorting
        The sorting to align.
    unit_peak_shifts : dict
        Dictionary mapping the unit_id to the unit's shift (in number of samples).
        A positive shift means the spike train is shifted back in time, while
        a negative shift means the spike train is shifted forward.
    
    Returns
    -------
    aligned_sorting : AlignSortingExtractor
        The aligned sorting.
  __init__(self, sorting, unit_peak_shifts)
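
The shift convention described above can be sketched with plain numpy on hypothetical spike trains (one plausible reading of "positive shift means shifted back in time", not the extractor's internal code):

```python
import numpy as np

# Hypothetical spike trains (in samples) and per-unit shifts.
spike_trains = {"u0": np.array([100, 250, 400]), "u1": np.array([120, 300])}
unit_peak_shifts = {"u0": 2, "u1": -3}

# A positive shift moves the train back in time (earlier samples),
# a negative shift moves it forward.
aligned = {u: st - unit_peak_shifts[u] for u, st in spike_trains.items()}
print(aligned["u0"])  # [ 98 248 398]
print(aligned["u1"])  # [123 303]
```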

Function: check_equal_template_with_distribution_overlap(waveforms0, waveforms1, template0=None, template1=None, num_shift=2, quantile_limit=0.8, return_shift=False)
  Docstring:
    Given 2 waveform sets, check if they come from the same distribution.
    
    This is computed with a simple trick:
    all waveforms from each cluster are projected onto the normed vector going from
    one template to the other. If the clusters are well separated enough, we should
    have one distribution centered near 0 and the other centered near the distance
    between the two templates.
    If the distributions overlap too much, then they come from the same distribution.
    
    Done by Samuel Garcia from an idea of Christophe Pouzat.
    This is used internally by tridesclous for the auto-merge step.
    
    Can also be used as a distance metric between 2 clusters.
    
    waveforms0 and waveforms1 have to be sparsified outside this function.
    
    This is done with a combination of shifts between the 2 clusters to also check
    if the clusters are similar up to a sample shift.
    
    Parameters
    ----------
    waveforms0, waveforms1 : numpy array
        Shape (num_spikes, num_samples, num_chans)
        num_spikes is not necessarily the same for the two clusters.
    template0, template1 : numpy array or None, default: None
        The average of each cluster.
        If None, it is computed.
    num_shift : int, default: 2
        Number of shifts on each side to perform.
    quantile_limit : float in [0, 1]
        The quantile overlap limit.
    
    Returns
    -------
    equal: bool
        equal or not

Function: compute_correlograms(sorting_analyzer_or_sorting, window_ms: 'float' = 50.0, bin_ms: 'float' = 1.0, method: 'str' = 'auto')
  Docstring:
    Compute auto and cross correlograms of unit spike times.
    
    Parameters
    ----------
    sorting_analyzer_or_sorting : SortingAnalyzer | Sorting
        A SortingAnalyzer or Sorting object
    window_ms : float, default: 50.0
        The window around the spike to compute the correlation in ms. For example,
         if 50 ms, the correlations will be computed at lags -25 ms ... 25 ms.
    bin_ms : float, default: 1.0
        The bin size in ms. This determines the bin size over which to
        combine lags. For example, with a window size of -25 ms to 25 ms, and
        bin size 1 ms, the correlation will be binned as -25 ms, -24 ms, ...
    method : "auto" | "numpy" | "numba", default: "auto"
         If "auto" and numba is installed, numba is used, otherwise numpy is used.
    
    Returns
    -------
    correlogram : np.array
        Correlograms with shape (num_units, num_units, num_bins)
        The diagonal of the correlogram (e.g. correlogram[A, A, :])
        holds the unit auto correlograms. The off-diagonal elements
        are the cross-correlograms between units, where correlogram[A, B, :]
        and correlogram[B, A, :] represent cross-correlation between
        the same pair of units, applied in opposite directions,
        correlogram[A, B, :] = correlogram[B, A, ::-1].
    bins :  np.array
        The bin edges in ms
    
    Notes
    -----
    In the extracellular electrophysiology context, a correlogram
    is a visualisation of the results of a cross-correlation
    between two spike trains. The cross-correlation slides one spike train
    along another sample-by-sample, taking the correlation at each 'lag'. This results
    in a plot with 'lag' (i.e. time offset) on the x-axis and 'correlation'
    (i.e. how similar the two spike trains are) on the y-axis. In this
    implementation, the y-axis result is the 'counts' of spike matches per
    time bin (rather than a computed correlation or covariance).
    
    In the present implementation, a 'window' around spikes is first
    specified. For example, if a window of 100 ms is taken, we will
    take the correlation at lags from -50 ms to +50 ms around the spike peak.
    In theory, we can have as many lags as we have samples. Often, this
    visualisation is too high resolution and instead the lags are binned
    (e.g. -50 to -45 ms, ..., -5 to 0 ms, 0 to 5 ms, ...., 45 to 50 ms).
    When using counts as output, binning the lags involves adding up all counts across
    a range of lags.
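
The counting scheme described in the Notes can be sketched in plain numpy, including the correlogram[A, B] = correlogram[B, A, ::-1] symmetry from the Returns section. This is a brute-force illustration on made-up spike times (in samples), not the optimized implementation:

```python
import numpy as np

def simple_correlogram(times_a, times_b, window, bin_size):
    """Histogram of pairwise lags (b - a) within +/- window, in samples."""
    lags = times_b[None, :] - times_a[:, None]   # all pairwise lags
    lags = lags[np.abs(lags) < window]
    edges = np.arange(-window, window + bin_size, bin_size)
    counts, _ = np.histogram(lags, bins=edges)
    return counts, edges

a = np.array([10, 50, 90])
b = np.array([12, 57])
counts_ab, edges = simple_correlogram(a, b, window=25, bin_size=5)
counts_ba, _ = simple_correlogram(b, a, window=25, bin_size=5)
# The symmetry from the Returns section (for lags off the bin edges):
print(np.array_equal(counts_ab, counts_ba[::-1]))  # True
```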

Function: compute_isi_histograms_numba(sorting, window_ms: 'float' = 50.0, bin_ms: 'float' = 1.0)
  Docstring:
    Computes the Inter-Spike Intervals histogram for all
    the units inside the given sorting.
    
    This is a "brute force" method using compiled code (numba)
    to accelerate the computation.
    
    Implementation: Aurélien Wyngaard

Function: compute_isi_histograms_numpy(sorting, window_ms: 'float' = 50.0, bin_ms: 'float' = 1.0)
  Docstring:
    Computes the Inter-Spike Intervals histogram for all
    the units inside the given sorting.
    
    This is a very standard numpy implementation, nothing fancy.
    
    Implementation: Aurélien Wyngaard
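
The "very standard numpy implementation" amounts to a diff followed by a histogram. A self-contained sketch on a hypothetical regular spike train (times in ms here, to keep the arithmetic exact):

```python
import numpy as np

def isi_histogram(spike_times_ms, window_ms=50.0, bin_ms=1.0):
    """Histogram of inter-spike intervals, in ms."""
    isis = np.diff(spike_times_ms)
    edges = np.arange(0.0, window_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(isis, bins=edges)
    return counts, edges

times_ms = np.arange(0.0, 1000.0, 10.0)  # perfectly regular 10 ms firing
counts, edges = isi_histogram(times_ms)
print(int(counts[10]), int(counts.sum()))  # 99 99 -- all ISIs in the [10, 11) ms bin
```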

Function: compute_template_similarity_by_pair(sorting_analyzer_1, sorting_analyzer_2, method='cosine', support='union', num_shifts=0)
  Docstring:
    None

Function: correlogram_for_one_segment(spike_times, spike_unit_indices, window_size, bin_size)
  Docstring:
    A very well optimized algorithm for the cross-correlation of
    spike trains, copied from the Phy package, written by Cyrille Rossant.
    
    Parameters
    ----------
    spike_times : np.ndarray
        An array of spike times (in samples, not seconds).
        This contains spikes from all units.
    spike_unit_indices : np.ndarray
        An array of labels indicating the unit of the corresponding
        spike in `spike_times`.
    window_size : int
        The window size over which to perform the cross-correlation, in samples
    bin_size : int
        The size of the bins over which lags are combined, in samples.
    
    Returns
    -------
    correlograms : np.array
        A (num_units, num_units, num_bins) array of correlograms
        between all units at each lag time bin.
    
    Notes
    -----
    For all spikes, time difference between this spike and
    every other spike within the window is directly computed
    and stored as a count in the relevant lag time bin.
    
    Initially, the spike_times array is shifted by 1 position, and the difference
    computed. This gives the time differences between the closest spikes
    (skipping the zero-lag case). Next, the differences between
    spikes times in samples are converted into units relative to
    bin_size ('binarized'). Spikes in which the binarized difference to
    their closest neighbouring spike is greater than half the bin-size are
    masked.
    
    Finally, the indices of the (num_units, num_units, num_bins) correlogram
    that need incrementing are done so with `ravel_multi_index()`. This repeats
    for all shifts along the spike_train until no spikes have a corresponding
    match within the window size.

Function: get_template_metric_names()
  Docstring:
    None

==== DELIM ====
API for module: spikeinterface.preprocessing

Class: AlignSnippets
  Docstring:
    Abstract class representing several multichannel snippets.
  __init__(self, snippets, new_nbefore, new_nafter, mode='main_peak', interpolate=1, det_sign=0)

Class: AstypeRecording
  Docstring:
    The spikeinterface analog of numpy.astype
    
    Converts a recording to another dtype on the fly.
    
    For recording with an unsigned dtype, please use the `unsigned_to_signed` preprocessing function.
    
    Parameters
    ----------
    dtype : None | str | dtype, default: None
        dtype of the output recording. If None, takes dtype from input `recording`.
    recording : Recording
        The recording extractor to be converted.
    round : bool | None, default: None
        If True, will round the values to the nearest integer using `numpy.round`.
        If None and dtype is an integer, will round floats to nearest integer.
    
    Returns
    -------
    astype_recording : AstypeRecording
        The converted recording extractor object
  __init__(self, recording, dtype=None, round: 'bool | None' = None)

Class: AverageAcrossDirectionRecording
  Docstring:
    Abstract class representing a multichannel timeseries (or block of raw ephys traces).
    Internally handle list of RecordingSegment
  __init__(self, parent_recording: 'BaseRecording', direction: 'str' = 'y', dtype='float32')

Class: BandpassFilterRecording
  Docstring:
    Bandpass filter of a recording
    
    Parameters
    ----------
    recording : Recording
        The recording extractor to be re-referenced
    freq_min : float
        The highpass cutoff frequency in Hz
    freq_max : float
        The lowpass cutoff frequency in Hz
    margin_ms : float
        Margin in ms on border to avoid border effect
    dtype : dtype or None
        The dtype of the returned traces. If None, the dtype of the parent recording is used
    {}
    
    Returns
    -------
    filter_recording : BandpassFilterRecording
        The bandpass-filtered recording extractor object
  __init__(self, recording, freq_min=300.0, freq_max=6000.0, margin_ms=5.0, dtype=None, **filter_kwargs)

Class: BlankSaturationRecording
  Docstring:
    Find and remove parts of the signal with extreme values. Some arrays
    may produce these when amplifiers enter saturation, typically for
    short periods of time. To remove these artefacts, values below or above
    a threshold are set to the median signal value.
    The threshold is either estimated automatically, using the lower and upper
    0.1 signal percentile with the largest deviation from the median, or specified.
    Use this function with caution, as it may clip uncontaminated signals. A warning is
    printed if the data range suggests no artefacts.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be transformed
    abs_threshold : float or None, default: None
        The absolute value for considering that the signal is saturating
    quantile_threshold : float or None, default: None
        The value in [0, 1] used if abs_threshold is None to automatically set the
        abs_threshold given the data. Must be provided if abs_threshold is None
    direction : "upper" | "lower" | "both", default: "upper"
        Only values higher than the detection threshold are set to fill_value ("upper"),
        or only values lower than the detection threshold ("lower"), or both ("both")
    fill_value : float or None, default: None
        The value to write instead of the saturating signal. If None, then the value is
        automatically computed as the median signal value
    num_chunks_per_segment : int, default: 50
        The number of chunks per segments to consider to estimate the threshold/fill_values
    chunk_size : int, default: 500
        The chunk size to estimate the threshold/fill_values
    seed : int, default: 0
        The seed to select the random chunks
    
    Returns
    -------
    rescaled_traces : BlankSaturationRecording
        The filtered traces recording extractor object
  __init__(self, recording, abs_threshold=None, quantile_threshold=None, direction='upper', fill_value=None, num_chunks_per_segment=50, chunk_size=500, seed=0)
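
The automatic thresholding and median fill described above can be sketched in plain numpy for direction="upper". Simulated data and the 1% quantile choice are illustrative assumptions, not the class defaults:

```python
import numpy as np

rng = np.random.default_rng(0)
traces = rng.standard_normal(5000)
traces[1000:1010] = 50.0  # simulated saturation artefact

# Automatic threshold from an upper quantile, fill with the median
# (the behaviour described above, for direction="upper").
abs_threshold = np.quantile(traces, 1 - 0.01)
fill_value = np.median(traces)
blanked = np.where(traces > abs_threshold, fill_value, traces)
print(blanked.max() <= abs_threshold)  # True -- the artefact is gone
```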

Class: CenterRecording
  Docstring:
    Centers traces from the given recording extractor by removing the median/mean of each channel.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be centered
    mode : "median" | "mean", default: "median"
        The method used to center the traces
    dtype : str or np.dtype, default: "float32"
        The dtype of the output traces
    **random_chunk_kwargs : Keyword arguments for `spikeinterface.core.get_random_data_chunk()` function
    
    Returns
    -------
    centered_traces : ScaleRecording
        The centered traces recording extractor object
  __init__(self, recording, mode='median', dtype='float32', **random_chunk_kwargs)

Class: ClipRecording
  Docstring:
    Limit the values of the data between a_min and a_max. Values exceeding the
    range will be set to the minimum or maximum, respectively.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be transformed
    a_min : float or None, default: None
        Minimum value. If `None`, clipping is not performed on lower
        interval edge.
    a_max : float or None, default: None
        Maximum value. If `None`, clipping is not performed on upper
        interval edge.
    
    Returns
    -------
    rescaled_traces : ClipTracesRecording
        The clipped traces recording extractor object
  __init__(self, recording, a_min=None, a_max=None)

Class: CommonReferenceRecording
  Docstring:
    Re-references the recording extractor traces. That is, the values of the traces are
    shifted so that there is a new zero (reference).
    
    The new reference can be estimated either by using a common median reference (CMR) or
    a common average reference (CAR).
    
    The new reference can be set three ways:
         * "global": the median/average of all channels is set as the new reference.
            In this case, the 'global' median/average is subtracted from all channels.
         * "single": In the simplest case, a single channel from the recording is set as the new reference.
            This channel is subtracted from all other channels. To use this option, the `ref_channel_ids` argument
            is used with a single channel id. Note that this option will zero out the reference channel.
            A collection of channels can also be used as the new reference. In this case, the median/average of the
            selected channels is subtracted from all other channels. To use this option, pass the group of channels as
            a list in `ref_channel_ids`.
         * "local": the median/average within an annulus is set as the new reference.
            The parameters of the annulus are specified using the `local_radius` argument. With this option, both
            channels which are too close and channels which are too far are excluded from the median/average. Note
            that setting the `local_radius` to (0, include_radius) will yield a simple circular local region.
    
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be re-referenced
    reference : "global" | "single" | "local", default: "global"
        If "global" the reference is the average or median across all the channels. To select a subset of channels,
        you can use the `ref_channel_ids` parameter.
        If "single", the reference is a single channel or a list of channels that need to be set with the `ref_channel_ids`.
        If "local", the reference is the set of channels within an annulus that must be set with the `local_radius` parameter.
    operator : "median" | "average", default: "median"
        If "median", a common median reference (CMR) is implemented (the median of
            the selected channels is removed for each timestamp).
        If "average", common average reference (CAR) is implemented (the mean of the selected channels is removed
            for each timestamp).
    groups : list or None, default: None
        List of lists containing the channel ids for splitting the reference. The CMR, CAR, or referencing with respect to
        single channels are applied group-wise. However, this is not applied for the local CAR.
        It is useful when dealing with different channel groups, e.g. multiple tetrodes.
    ref_channel_ids : list | str | int | None, default: None
        If "global" reference, a list of channels to be used as reference.
        If "single" reference, a list of one channel or a single channel id is expected.
        If "groups" is provided, then a list of channels to be applied to each group is expected.
    local_radius : tuple(int, int), default: (30, 55)
        Use in the local CAR implementation as the selecting annulus with the following format:
    
        `(exclude radius, include radius)`
    
        Where the exclude radius is the inner radius of the annulus and the include radius is the outer radius of the
        annulus. The exclude radius is used to exclude channels that are too close to the reference channel and the
        include radius delineates the outer boundary of the annulus whose role is to exclude channels
        that are too far away.
    
    dtype : None or dtype, default: None
        If None the parent dtype is kept.
    
    Returns
    -------
    referenced_recording : CommonReferenceRecording
        The re-referenced recording extractor object
  __init__(self, recording: 'BaseRecording', reference: "Literal['global', 'single', 'local']" = 'global', operator: "Literal['median', 'average']" = 'median', groups: 'list | None' = None, ref_channel_ids: 'list | str | int | None' = None, local_radius: 'tuple[float, float]' = (30.0, 55.0), dtype: 'str | np.dtype | None' = None)
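
The "global" + "median" case (CMR) reduces to subtracting the across-channel median at each timestamp. A numpy sketch with simulated shared noise, not the extractor's code path:

```python
import numpy as np

rng = np.random.default_rng(0)
traces = rng.standard_normal((1000, 8))          # (num_samples, num_channels)
traces += rng.standard_normal((1000, 1)) * 5.0   # noise shared by all channels

# Global common median reference (reference="global", operator="median"):
# subtract the across-channel median at every timestamp.
referenced = traces - np.median(traces, axis=1, keepdims=True)
print(np.abs(np.median(referenced, axis=1)).max())  # ~0: common component removed
```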

Class: DecimateRecording
  Docstring:
    Decimate the recording extractor traces using array slicing
    
    Important: This uses simple array slicing for decimation rather than e.g. scipy.signal.decimate.
    This might introduce aliasing, or skip across signal of interest.
    Consider spikeinterface.preprocessing.ResampleRecording for safe resampling.
    
    Parameters
    ----------
    recording : Recording
        The recording extractor to be decimated. Each segment is decimated independently.
    decimation_factor : int
        Step between successive frames sampled from the parent recording.
        The same decimation factor is applied to all segments from the parent recording.
    decimation_offset : int, default: 0
        Index of first frame sampled from the parent recording.
        Expecting `decimation_offset` < `decimation_factor`, and `decimation_offset` < parent_recording.get_num_samples()
        to ensure that the decimated recording has at least one frame. Consider combining DecimateRecording
        with FrameSliceRecording for fine control on the recording start and end frames.
        The same decimation offset is applied to all segments from the parent recording.
    
    Returns
    -------
    decimate_recording: DecimateRecording
        The decimated recording extractor object. The full traces of the child recording segment
        correspond to the traces of the parent segment as follows:
            ```<decimated_traces> = <parent_traces>[<decimation_offset>::<decimation_factor>]```
  __init__(self, recording, decimation_factor, decimation_offset=0)
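
The slicing relation in the Returns section is literally a numpy stride. A tiny self-contained illustration on a made-up traces array:

```python
import numpy as np

traces = np.arange(20.0).reshape(10, 2)  # (num_samples, num_channels)
decimation_factor, decimation_offset = 3, 1

# Exactly the relation given above:
# <decimated_traces> = <parent_traces>[<decimation_offset>::<decimation_factor>]
decimated = traces[decimation_offset::decimation_factor]
print(decimated[:, 0])  # channel 0 at frames 1, 4, 7 -> [ 2.  8. 14.]
```

Note the aliasing caveat above: no anti-aliasing filter is applied before taking every Nth frame.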

Class: DeepInterpolatedRecording
  Docstring:
    DeepInterpolatedRecording is a wrapper around a recording extractor that allows to apply a deepinterpolation model.
    
    For more information, see:
    Lecoq et al. (2021) Removing independent noise in systems neuroscience data using DeepInterpolation.
    Nature Methods. 18: 1401-1408. doi: 10.1038/s41592-021-01285-2.
    
    Parts of this code is adapted from https://github.com/AllenInstitute/deepinterpolation.
    
    Parameters
    ----------
    recording: BaseRecording
        The recording extractor to be deepinterpolated
    model_path: str
        Path to the deepinterpolation h5 model
    pre_frame: int
        Number of frames before the frame to be predicted
    post_frame: int
        Number of frames after the frame to be predicted
    pre_post_omission: int
        Number of frames to be omitted before and after the frame to be predicted
    batch_size: int
        Batch size to be used for the prediction
    predict_workers: int
        Number of workers to be used for the tensorflow `predict` function.
        Multiple workers can be used to speed up the prediction by pre-fetching the data.
    use_gpu: bool
        If True, the gpu will be used for the prediction
    disable_tf_logger: bool
        If True, the tensorflow logger will be disabled
    memory_gpu: int
        The amount of memory to be used by the gpu
    
    Returns
    -------
    recording: DeepInterpolatedRecording
        The deepinterpolated recording extractor object
  __init__(self, recording, model_path: 'str', pre_frame: 'int' = 30, post_frame: 'int' = 30, pre_post_omission: 'int' = 1, batch_size: 'int' = 128, use_gpu: 'bool' = True, predict_workers: 'int' = 1, disable_tf_logger: 'bool' = True, memory_gpu: 'Optional[int]' = None)

Class: DepthOrderRecording
  Docstring:
    Re-orders the recording (channel IDs, channel locations, and traces)
    
    Sorts channels lexicographically according to the dimensions in
    `dimensions`. See the documentation for `order_channels_by_depth`.
    
    Parameters
    ----------
    parent_recording : BaseRecording
        The recording to re-order.
    channel_ids : list/array or None
        If given, a subset of channels to order locations for
    dimensions : str or tuple, list, default: ("x", "y")
        If str, it needs to be "x", "y", "z".
        If tuple or list, it sorts the locations in two dimensions using lexsort.
        This approach is recommended since there is less ambiguity
    flip : bool, default: False
        If flip is False then the order is bottom first (starting from tip of the probe).
        If flip is True then the order is upper first.
  __init__(self, parent_recording, channel_ids=None, dimensions=('x', 'y'), flip=False)
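
The lexicographic ordering over dimensions ("x", "y") can be sketched with np.lexsort on hypothetical channel locations. Treating y (depth) as the primary key with x as tiebreak is an assumption consistent with np.lexsort's last-key-is-primary convention:

```python
import numpy as np

# Hypothetical channel locations (x, y) in um, unordered.
locations = np.array([[20.0, 40.0], [0.0, 0.0], [20.0, 0.0], [0.0, 40.0]])

# np.lexsort sorts by the LAST key first: y is primary, x breaks ties.
order = np.lexsort((locations[:, 0], locations[:, 1]))
print(order)  # [1 2 3 0] -- bottom channels first (flip=False behaviour)
```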

Class: DirectionalDerivativeRecording
  Docstring:
    Abstract class representing a multichannel timeseries (or block of raw ephys traces).
    Internally handle list of RecordingSegment
  __init__(self, recording: 'BaseRecording', direction: 'str' = 'y', order: 'int' = 1, edge_order: 'int' = 1, dtype='float32')

Class: FilterRecording
  Docstring:
    A generic filter class based on:
        For filter coefficient generation:
            * scipy.signal.iirfilter
        For filter application:
            * scipy.signal.filtfilt or scipy.signal.sosfiltfilt when direction = "forward-backward"
            * scipy.signal.lfilter or scipy.signal.sosfilt when direction = "forward" or "backward"
    
    BandpassFilterRecording is built on top of it.
    
    Parameters
    ----------
    recording : Recording
        The recording extractor to be re-referenced
    band : float or list, default: [300.0, 6000.0]
        If float, cutoff frequency in Hz for "highpass" filter type
        If list, band (low, high) in Hz for "bandpass" filter type
    btype : "bandpass" | "highpass", default: "bandpass"
        Type of the filter
    margin_ms : float, default: 5.0
        Margin in ms on border to avoid border effect
    coeff : array | None, default: None
        Filter coefficients in the filter_mode form.
    dtype : dtype or None, default: None
        The dtype of the returned traces. If None, the dtype of the parent recording is used
    add_reflect_padding : bool, default: False
        If True, uses a left and right margin during calculation.
    filter_order : int, default: 5
        The order of the filter for `scipy.signal.iirfilter`
    filter_mode :  "sos" | "ba", default: "sos"
        Filter form of the filter coefficients for `scipy.signal.iirfilter`:
        - second-order sections ("sos")
        - numerator/denominator : ("ba")
    ftype : str, default: "butter"
        Filter type for `scipy.signal.iirfilter` e.g. "butter", "cheby1".
    direction : "forward" | "backward" | "forward-backward", default: "forward-backward"
        Direction of filtering:
        - "forward" - filter is applied to the timeseries in one direction, creating phase shifts
        - "backward" - the timeseries is reversed, the filter is applied and filtered timeseries reversed again. Creates phase shifts in the opposite direction to "forward"
        - "forward-backward" - Applies the filter in the forward and backward direction, resulting in zero-phase filtering. Note this doubles the effective filter order.
    
    Returns
    -------
    filter_recording : FilterRecording
        The filtered recording extractor object
  __init__(self, recording, band=[300.0, 6000.0], btype='bandpass', filter_order=5, ftype='butter', filter_mode='sos', margin_ms=5.0, add_reflect_padding=False, coeff=None, dtype=None, direction='forward-backward')

Class: GaussianFilterRecording
  Docstring:
    Class for performing a gaussian filtering/smoothing on a recording.
    
    This is done by a convolution with a Gaussian kernel, which acts as a lowpass filter.
    A highpass filter can be computed by subtracting the result of the convolution
    from the original signal.
    A bandpass filter is obtained by subtracting the signal smoothed with a narrow
    Gaussian from the signal smoothed with a wider Gaussian.
    
    Here, convolution is performed in the Fourier domain to accelerate the computation.
    
    Parameters
    ----------
    recording : BaseRecording
        The recording extractor to be filtered.
    freq_min : float or None
        The lower frequency cutoff for the bandpass filter.
        If None, the resulting object is a lowpass filter.
    freq_max : float or None
        The higher frequency cutoff for the bandpass filter.
        If None, the resulting object is a highpass filter.
    margin_sd : float, default: 5.0
        The number of standard deviations to take for margins.
    
    Returns
    -------
    gaussian_filtered_recording : GaussianFilterRecording
        The filtered recording extractor object.
  __init__(self, recording: 'BaseRecording', freq_min: 'float' = 300.0, freq_max: 'float' = 5000.0, margin_sd: 'float' = 5.0)

Class: HighpassFilterRecording
  Docstring:
    Highpass filter of a recording
    
    Parameters
    ----------
    recording : Recording
        The recording extractor to be re-referenced
    freq_min : float
        The highpass cutoff frequency in Hz
    margin_ms : float
        Margin in ms on border to avoid border effect
    dtype : dtype or None
        The dtype of the returned traces. If None, the dtype of the parent recording is used
    {}
    
    Returns
    -------
    filter_recording : HighpassFilterRecording
        The highpass-filtered recording extractor object
  __init__(self, recording, freq_min=300.0, margin_ms=5.0, dtype=None, **filter_kwargs)

Class: HighpassSpatialFilterRecording
  Docstring:
    Perform destriping with high-pass spatial filtering. Uses
    the kfilt() function of the International Brain Laboratory.
    
    Median average filtering (removing the median of the signal across
    channels) assumes noise is constant across all channels. However,
    noise can exhibit low-frequency changes across nearby channels.
    
    This filter is an alternative to median filtering across channels: the cut-band is
    extended from 0 to the 0.01 Nyquist corner frequency using a Butterworth filter.
    This allows removal of contaminating stripes that are not constant across channels.
    
    A Butterworth filter is applied on the 0 axis (across channels), with optional
    padding (mirrored) and tapering (cosine taper) prior to filtering.
    
    Parameters
    ----------
    recording : BaseRecording
        The parent recording
    n_channel_pad : int, default: 60
        Number of channels to pad prior to filtering.
        Channels are padded with mirroring.
        If None, no padding is applied
    n_channel_taper : int, default: 0
        Number of channels to perform cosine tapering on
        prior to filtering. If None and n_channel_pad is set,
        n_channel_taper will be set to the number of padded channels.
        Otherwise, the passed value will be used
    direction : "x" | "y" | "z", default: "y"
        The direction in which the spatial filter is applied
    apply_agc : bool, default: True
        If True, Automatic Gain Control is applied
    agc_window_length_s : float, default: 0.1
        Window in seconds to compute Hanning window for AGC
    highpass_butter_order : int, default: 3
        Order of spatial butterworth filter
    highpass_butter_wn : float, default: 0.01
        Critical frequency (with respect to Nyquist) of spatial butterworth filter
    dtype : dtype, default: None
        The dtype of the output traces. If None, the dtype is the same as the input traces
    
    Returns
    -------
    highpass_recording : HighpassSpatialFilterRecording
        The recording with highpass spatial filtered traces
    
    References
    ----------
    Details of the high-pass spatial filter function (written by Olivier Winter)
    used in the IBL pipeline can be found at:
    International Brain Laboratory et al. (2022). Spike sorting pipeline for the International Brain Laboratory.
    https://www.internationalbrainlab.com/repro-ephys
  __init__(self, recording, n_channel_pad=60, n_channel_taper=0, direction='y', apply_agc=True, agc_window_length_s=0.1, highpass_butter_order=3, highpass_butter_wn=0.01, dtype=None)
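The core spatial operation can be sketched with NumPy/SciPy alone. This is a minimal illustration of the Butterworth high-pass applied across the channel axis, omitting the AGC, padding, and tapering steps; `spatial_highpass` is an illustrative helper, not part of the SpikeInterface API:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spatial_highpass(traces, order=3, wn=0.01):
    """High-pass filter across channels for each time sample.

    traces : (num_samples, num_channels) array
    wn     : critical frequency relative to the spatial Nyquist
    """
    sos = butter(order, wn, btype="highpass", output="sos")
    # Filter along the channel axis: stripes that are roughly constant
    # across channels live near spatial frequency 0 and are removed.
    return sosfiltfilt(sos, traces, axis=1)

rng = np.random.default_rng(0)
traces = rng.standard_normal((1000, 64)).astype("float32")
traces += 5.0  # a "stripe": a common offset shared by all channels
filtered = spatial_highpass(traces)
```

After filtering, the common-mode offset is strongly attenuated while per-channel structure survives, which is exactly why this works on stripes that median subtraction misses.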

Class: InterpolateBadChannelsRecording
  Docstring:
    Interpolate the channels labeled as bad using linear interpolation.
    This is based on the distance (Gaussian kernel) from the bad channel,
    as determined from x,y channel coordinates.
    
    Details of the interpolation function (written by Olivier Winter) used in the IBL pipeline
    can be found at:
    
    International Brain Laboratory et al. (2022). Spike sorting pipeline for the
    International Brain Laboratory. https://www.internationalbrainlab.com/repro-ephys
    
    Parameters
    ----------
    recording : BaseRecording
        The parent recording
    bad_channel_ids : list or 1d np.array
        Channel ids of the bad channels to interpolate.
    sigma_um : float or None, default: None
        Distance between sequential channels in um. If None, will use
        the most common distance between y-axis channels
    p : float, default: 1.3
        Exponent of the Gaussian kernel. Determines rate of decay
        for distance weightings
    weights : np.array or None, default: None
        The weights to give to bad_channel_ids at interpolation.
        If None, weights are automatically computed
    
    Returns
    -------
    interpolated_recording : InterpolateBadChannelsRecording
        The recording object with interpolated bad channels
  __init__(self, recording, bad_channel_ids, sigma_um=None, p=1.3, weights=None)
  Method: check_inputs(self, recording, bad_channel_ids)
    Docstring:
      None
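The distance-based weighting can be sketched as follows. This assumes a Gaussian-style kernel of the form exp(-(d/sigma)**p); `interpolation_weights` is an illustrative helper, not the SpikeInterface API:

```python
import numpy as np

def interpolation_weights(positions, bad_idx, good_idx, sigma_um=20.0, p=1.3):
    """Kernel weights for reconstructing one bad channel from the good
    channels, based on channel x,y positions (in um)."""
    d = np.linalg.norm(positions[good_idx] - positions[bad_idx], axis=1)
    w = np.exp(-((d / sigma_um) ** p))
    return w / w.sum()  # normalize so the interpolation preserves scale

# Four channels on a vertical line, channel 1 is bad.
positions = np.array([[0, 0], [0, 20], [0, 40], [0, 60]], dtype=float)
w = interpolation_weights(positions, bad_idx=1, good_idx=np.array([0, 2, 3]))
```

The nearest neighbours (channels 0 and 2, both 20 um away) receive equal, largest weights, and the interpolated trace is the weighted sum of the good-channel traces.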

Class: NormalizeByQuantileRecording
  Docstring:
    Rescale the traces from the given recording extractor with a scalar
    and offset. First, the median and quantiles of the distribution are estimated.
    Then the distribution is rescaled and offset so that the distance between
    the quantiles (1st and 99th by default) is set to `scale`, and the median
    is set to the given `median`.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be transformed
    scale : float, default: 1.0
        Scale for the output distribution
    median : float, default: 0.0
        Median for the output distribution
    q1 : float, default: 0.01
        Lower quantile used for measuring the scale
    q2 : float, default: 0.99
        Upper quantile used for measuring the scale
    mode : "by_channel" | "pool_channel", default: "by_channel"
        If "by_channel", each channel is rescaled independently;
        if "pool_channel", all channels are rescaled together.
    dtype : str or np.dtype, default: "float32"
        The dtype of the output traces
    **random_chunk_kwargs : Keyword arguments for `spikeinterface.core.get_random_data_chunk()` function
    
    Returns
    -------
    rescaled_traces : NormalizeByQuantileRecording
        The rescaled traces recording extractor object
  __init__(self, recording, scale=1.0, median=0.0, q1=0.01, q2=0.99, mode='by_channel', dtype='float32', **random_chunk_kwargs)
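The rescaling reduces to a per-channel affine transform. This NumPy sketch (the helper name is illustrative) shows the estimation and mapping described above:

```python
import numpy as np

def normalize_by_quantile(traces, scale=1.0, median=0.0, q1=0.01, q2=0.99):
    """Per-channel rescale: the q1-q2 inter-quantile distance maps to
    `scale` and the channel median maps to `median` (axis 0 = time)."""
    med = np.median(traces, axis=0)
    q = np.quantile(traces, [q1, q2], axis=0)
    gain = scale / (q[1] - q[0])
    return (traces - med) * gain + median

rng = np.random.default_rng(0)
traces = rng.normal(loc=12.0, scale=3.0, size=(20000, 4))
out = normalize_by_quantile(traces)
# Medians are moved to 0 and the 1st-99th quantile distance to 1
```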

Class: NotchFilterRecording
  Docstring:
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be notch-filtered
    freq : int or float
        The target frequency in Hz of the notch filter
    q : int
        The quality factor of the notch filter
    dtype : None | dtype, default: None
        dtype of recording. If None, will take from `recording`
    margin_ms : float, default: 5.0
        Margin in ms on border to avoid border effect
    
    Returns
    -------
    filter_recording : NotchFilterRecording
        The notch-filtered recording extractor object
  __init__(self, recording, freq=3000, q=30, margin_ms=5.0, dtype=None)
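A notch filter of this kind can be built directly with `scipy.signal.iirnotch`. This sketch assumes a 30 kHz sampling rate and uses the class defaults (`freq=3000`, `q=30`); it is an illustration of the filter, not the class internals:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 30000.0
freq, q = 3000.0, 30  # target frequency (Hz) and quality factor
b, a = iirnotch(freq, q, fs=fs)

# Test signal: a 3 kHz component (to be removed) plus a 500 Hz component
t = np.arange(int(fs)) / fs
sig = np.sin(2 * np.pi * freq * t) + np.sin(2 * np.pi * 500.0 * t)
filtered = filtfilt(b, a, sig)
# The 3 kHz component is strongly attenuated; the 500 Hz one survives
```

A higher `q` gives a narrower notch, removing less of the neighbouring spectrum.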

Class: PhaseShiftRecording
  Docstring:
    This applies a phase shift to a recording to cancel the small sampling
    delays across channels present in some recording systems.
    
    This is particularly relevant for Neuropixels recordings.
    
    This code is inspired by the IBL library
    https://github.com/int-brain-lab/ibllib/blob/master/ibllib/dsp/fourier.py
    and also the one from spikeglx
    https://billkarsh.github.io/SpikeGLX/help/dmx_vs_gbl/dmx_vs_gbl/
    
    Parameters
    ----------
    recording : Recording
        The recording. It needs to have "inter_sample_shift" in properties.
    margin_ms : float, default: 40.0
        Margin in ms for computation.
        40 ms ensures a very small error when doing chunk processing
    inter_sample_shift : None or numpy array, default: None
        If "inter_sample_shift" is not in recording properties,
        we can externally provide one.
    dtype : None | str | dtype, default: None
        Dtype of input and output `recording` objects.
    
    
    Returns
    -------
    filter_recording : PhaseShiftRecording
        The phase shifted recording object
  __init__(self, recording, margin_ms=40.0, inter_sample_shift=None, dtype=None)
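The underlying operation is a per-channel sub-sample shift via the Fourier shift theorem. `phase_shift` below is an illustrative sketch, not the SpikeInterface implementation (which additionally handles chunking and margins):

```python
import numpy as np

def phase_shift(traces, sample_shifts):
    """Delay each channel by a (possibly fractional) number of samples
    using the Fourier shift theorem. traces: (num_samples, num_channels)."""
    n = traces.shape[0]
    freqs = np.fft.rfftfreq(n)  # in cycles per sample
    spectra = np.fft.rfft(traces, axis=0)
    # Multiplying by e^{-2*pi*i*f*tau} delays a channel by tau samples
    spectra *= np.exp(-2j * np.pi * freqs[:, None] * sample_shifts[None, :])
    return np.fft.irfft(spectra, n=n, axis=0)

n = 1024
t = np.arange(n)
traces = np.stack([np.sin(2 * np.pi * (8 / n) * t)] * 2, axis=1)
shifted = phase_shift(traces, np.array([0.0, 0.5]))  # half-sample delay on ch 1
```

Channel 0 is returned unchanged, while channel 1 is delayed by exactly half a sample, something impossible with integer index shifts.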

Class: RectifyRecording
  Docstring:
    Abstract class representing a multichannel timeseries (or block of raw ephys traces).
    Internally handles a list of RecordingSegment
  __init__(self, recording)

Class: RemoveArtifactsRecording
  Docstring:
    Removes stimulation artifacts from recording extractor traces. By default,
    artifact periods are zeroed-out (mode = "zeros"). This is only recommended
    for traces that are centered around zero (e.g. through a prior highpass
    filter); if this is not the case, linear and cubic interpolation modes are
    also available, controlled by the "mode" input argument.
    Note that several artifacts can be removed at once (potentially with
    distinct duration each), if labels are specified
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to remove artifacts from
    list_triggers : list of lists/arrays
        One list per segment of int with the stimulation trigger frames
    ms_before : float or None, default: 0.5
        Time interval in ms to remove before the trigger events.
        If None, then also ms_after must be None and a single sample is removed
    ms_after : float or None, default: 3.0
        Time interval in ms to remove after the trigger events.
        If None, then also ms_before must be None and a single sample is removed
    list_labels : list of lists/arrays or None
        One list per segment of labels with the stimulation labels for the given
        artifacts. labels should be strings, for JSON serialization.
        Required for "median" and "average" modes.
    mode : "zeros", "linear", "cubic", "average", "median", default: "zeros"
        Determines what artifacts are replaced by. Can be one of the following:
    
        - "zeros": Artifacts are replaced by zeros.
    
        - "median": The median over all artifacts is computed and subtracted for
            each occurrence of an artifact

        - "average": The mean over all artifacts is computed and subtracted for each
            occurrence of an artifact

        - "linear": Replacements are obtained through linear interpolation between
           the trace before and after the artifact.
           If the trace starts or ends with an artifact period, the gap is filled
           with the closest available value before or after the artifact.
    
        - "cubic": Cubic spline interpolation between the trace before and after
           the artifact, referenced to evenly spaced fit points before and after
           the artifact. This is an option that can be helpful if there are
           significant LFP effects around the time of the artifact, but visual
           inspection of fit behaviour with your chosen settings is recommended.
           The spacing of fit points is controlled by "fit_sample_spacing", with
           greater spacing between points leading to a fit that is less sensitive
           to high frequency fluctuations but at the cost of a less smooth
           continuation of the trace.
           If the trace starts or ends with an artifact, the gap is filled with
           the closest available value before or after the artifact.
    fit_sample_spacing : float, default: 1.0
        Determines the spacing (in ms) of reference points for the cubic spline
        fit if mode = "cubic". Note : The actual fit samples are
        the median of the 5 data points around the time of each sample point to
        avoid excessive influence from hyper-local fluctuations.
    artifacts : dict or None, default: None
        If provided (when mode is "median" or "average") then it must be a dict with
        keys that are the labels of the artifacts, and values the artifacts themselves,
        on all channels (and thus bypassing ms_before and ms_after)
    sparsity : dict or None, default: None
        If provided (when mode is "median" or "average") then it must be a dict with
        keys that are the labels of the artifacts, and values that are boolean mask of
        the channels where the artifacts should be considered (for subtraction/scaling)
    scale_amplitude : bool, default: False
        If True, then for mode "median" or "average" the amplitude of the template
        will be scaled at each time occurrence to minimize residuals
    time_jitter : float, default: 0
        If non 0, then for mode "median" or "average", a time jitter in ms
        can be allowed to minimize the residuals
    waveforms_kwargs : None
        Deprecated and ignored
    
    Returns
    -------
    removed_recording : RemoveArtifactsRecording
        The recording extractor after artifact removal
  __init__(self, recording, list_triggers, ms_before=0.5, ms_after=3.0, mode='zeros', fit_sample_spacing=1.0, list_labels=None, artifacts=None, sparsity=None, scale_amplitude=False, time_jitter=0, waveforms_kwargs=None)
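The default "zeros" mode amounts to blanking a window of samples around each trigger frame. An illustrative sketch (the helper name is hypothetical):

```python
import numpy as np

def zero_out_artifacts(traces, triggers, ms_before, ms_after, fs):
    """Zero samples around each trigger frame ("zeros" mode).
    traces: (num_samples, num_channels), triggers: trigger frame indices."""
    traces = traces.copy()
    pad_before = int(ms_before * fs / 1000)
    pad_after = int(ms_after * fs / 1000)
    for trig in triggers:
        start = max(trig - pad_before, 0)
        stop = min(trig + pad_after, traces.shape[0])
        traces[start:stop] = 0
    return traces

fs = 30000
traces = np.ones((30000, 4), dtype="float32")
cleaned = zero_out_artifacts(traces, [15000], ms_before=0.5, ms_after=3.0, fs=fs)
# 0.5 ms + 3.0 ms at 30 kHz -> 15 + 90 = 105 samples blanked per trigger
```

As the docstring warns, blanking with zeros only makes sense on traces already centered around zero (e.g. after a highpass filter).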

Class: ResampleRecording
  Docstring:
    Resample the recording extractor traces.
    
    If the original sampling rate is a multiple of the resample_rate, it will use
    the signal.decimate method from scipy. In other cases, it uses signal.resample. In the
    latter case, the resulting signal can have issues on the edges, mainly on the
    rightmost.
    
    Parameters
    ----------
    recording : Recording
        The recording extractor to be resampled
    resample_rate : int
        The resampling frequency
    margin_ms : float, default: 100.0
        Margin in ms for computations, will be used to decrease edge effects.
    dtype : dtype or None, default: None
        The dtype of the returned traces. If None, the dtype of the parent recording is used.
    skip_checks : bool, default: False
        If True, checks on sampling frequencies and cutoff filter frequencies are skipped
    
    Returns
    -------
    resample_recording : ResampleRecording
        The resampled recording extractor object.
  __init__(self, recording, resample_rate, margin_ms=100.0, dtype=None, skip_checks=False)
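The decimate-vs-resample choice described above can be sketched directly with SciPy:

```python
import numpy as np
from scipy import signal

fs, resample_rate = 30000, 10000
traces = np.random.default_rng(0).standard_normal((fs, 2))

if fs % resample_rate == 0:
    # Integer ratio: decimate applies an anti-aliasing filter, then downsamples
    resampled = signal.decimate(traces, fs // resample_rate, axis=0)
else:
    # Non-integer ratio: Fourier resampling; edge artifacts are possible,
    # mostly on the rightmost edge
    num_samples = int(traces.shape[0] * resample_rate / fs)
    resampled = signal.resample(traces, num_samples, axis=0)
```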

Class: ScaleRecording
  Docstring:
    Scale traces from the given recording extractor with a scalar
    and offset. New traces = traces*scalar + offset.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be transformed
    gain : float or array
        Scalar for the traces of the recording extractor or array with scalars for each channel
    offset : float or array
        Offset for the traces of the recording extractor or array with offsets for each channel
    dtype : str or np.dtype, default: "float32"
        The dtype of the output traces
    
    Returns
    -------
    transform_traces : ScaleRecording
        The transformed traces recording extractor object
  __init__(self, recording, gain=1.0, offset=0.0, dtype='float32')

Class: SilencedPeriodsRecording
  Docstring:
    Silence user-defined periods from recording extractor traces. By default,
    periods are zeroed-out (mode = "zeros"). You can also fill the periods with noise.
    Note that both methods assume that traces are centered around zero.
    If this is not the case, make sure you apply a filter or center function prior to
    silencing periods.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to silence periods in
    list_periods : list of lists/arrays
        One list per segment of tuples (start_frame, end_frame) to silence
    noise_levels : array
        Noise levels if already computed
    seed : int | None, default: None
        Random seed for `get_noise_levels` and `NoiseGeneratorRecording`.
        If none, `get_noise_levels` uses `seed=0` and `NoiseGeneratorRecording` generates a random seed using `numpy.random.default_rng`.
    mode : "zeros" | "noise", default: "zeros"
        Determines what periods are replaced by. Can be one of the following:
    
        - "zeros": Artifacts are replaced by zeros.
    
        - "noise": The periods are filled with Gaussian noise that has the
                   same variance as the one in the recordings, on a per-channel
                   basis
    **random_chunk_kwargs : Keyword arguments for `spikeinterface.core.get_random_data_chunk()` function
    
    Returns
    -------
    silence_recording : SilencedPeriodsRecording
        The recording extractor after silencing some periods
  __init__(self, recording, list_periods, mode='zeros', noise_levels=None, seed=None, **random_chunk_kwargs)

Class: UnsignedToSignedRecording
  Docstring:
    Converts a recording with unsigned traces to a signed one.
    
    Parameters
    ----------
    recording : Recording
        The recording to be signed.
    bit_depth : int or None, default: None
        In case the bit depth of the ADC does not match that of the data type,
        it specifies the bit depth of the ADC to estimate the offset.
        For example, a `bit_depth` of 12 will correct for an offset of `2**11`
  __init__(self, recording, bit_depth=None)
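The offset correction can be sketched in a few lines; `unsigned_to_signed` is an illustrative helper, not the class API:

```python
import numpy as np

def unsigned_to_signed(traces_uint, bit_depth=None):
    """Convert unsigned traces to signed by removing the mid-range offset.
    If the ADC bit depth is smaller than the dtype (e.g. a 12-bit ADC
    stored as uint16), the offset is 2**(bit_depth - 1), not 2**15."""
    nbits = bit_depth or traces_uint.dtype.itemsize * 8
    offset = 2 ** (nbits - 1)
    return traces_uint.astype("int32") - offset

raw = np.array([0, 2048, 4095], dtype="uint16")  # 12-bit ADC samples
signed = unsigned_to_signed(raw, bit_depth=12)   # offset of 2**11 removed
```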

Class: WhitenRecording
  Docstring:
    Whitens the recording extractor traces.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be whitened.
    dtype : None or dtype, default: None
        Datatype of the output recording (covariance matrix estimation
        and whitening are performed in float32).
        If None, the parent dtype is kept.
        For integer dtype an int_scale must also be given.
    mode : "global" | "local", default: "global"
        "global" uses the entire covariance matrix to compute the W matrix
        "local" uses local covariance (by radius) to compute the W matrix
    radius_um : None or float, default: None
        Used for mode = "local" to get the neighborhood
    apply_mean : bool, default: False
        Whether to subtract the mean matrix M before the dot product with W.
    int_scale : None or float, default: None
        Apply a scaling factor to fit the integer range.
        This is used when the dtype is an integer, so that the output is scaled.
        For example, a value of `int_scale=200` will scale the traces value to a standard deviation of 200.
    eps : float or None, default: None
        Small epsilon to regularize SVD.
        If None, eps is defaulted to 1e-8. If the data is float type and scaled down to very small values,
        then the eps is automatically set to a small fraction (1e-3) of the median of the squared data.
    W : 2d np.array or None, default: None
        Pre-computed whitening matrix
    M : 1d np.array or None, default: None
        Pre-computed means.
        M can be None when previously computed with apply_mean=False
    regularize : bool, default: False
        Boolean to decide if we want to regularize the covariance matrix, using a chosen method
        of sklearn, specified in regularize_kwargs. Default is GraphicalLassoCV
    regularize_kwargs : {'method' : 'GraphicalLassoCV'}
        Dictionary of the parameters that could be provided to the method of sklearn, if
        the covariance matrix needs to be regularized. Note that sklearn covariance methods
        that are implemented as functions, not classes, are not supported.
    **random_chunk_kwargs : Keyword arguments for `spikeinterface.core.get_random_data_chunk()` function
    
    Returns
    -------
    whitened_recording : WhitenRecording
        The whitened recording extractor
  __init__(self, recording, dtype=None, apply_mean=False, regularize=False, regularize_kwargs=None, mode='global', radius_um=100.0, int_scale=None, eps=None, W=None, M=None, **random_chunk_kwargs)

Class: ZScoreRecording
  Docstring:
    Centers traces from the given recording extractor by removing the median of each channel
    and dividing by the MAD.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be centered
    mode : "median+mad" | "mean+std", default: "median+mad"
        The mode to compute the zscore
    dtype : None or dtype, default: "float32"
        If None, the parent dtype is kept.
        For integer dtype an int_scale must also be given.
    gain : None or np.array
        Pre-computed gain.
    offset : None or np.array
        Pre-computed offset
    int_scale : None or float
        Apply a scaling factor to fit the integer range.
        This is used when the dtype is an integer, so that the output is scaled.
        For example, a value of `int_scale=200` will scale the zscore value to a standard deviation of 200.
    **random_chunk_kwargs : Keyword arguments for `spikeinterface.core.get_random_data_chunk()` function
    
    Returns
    -------
    centered_traces : ScaleRecording
        The centered traces recording extractor object
  __init__(self, recording, mode='median+mad', gain=None, offset=None, int_scale=None, dtype='float32', **random_chunk_kwargs)
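Both modes reduce to a per-channel affine transform. A NumPy sketch (the helper name is illustrative):

```python
import numpy as np

def zscore(traces, mode="median+mad"):
    """Center and scale each channel (axis 0 = time)."""
    if mode == "median+mad":
        center = np.median(traces, axis=0)
        # 1.4826 makes the MAD a consistent estimator of the standard
        # deviation for Gaussian noise
        spread = np.median(np.abs(traces - center), axis=0) * 1.4826
    else:  # "mean+std"
        center = traces.mean(axis=0)
        spread = traces.std(axis=0)
    return (traces - center) / spread

rng = np.random.default_rng(0)
traces = rng.normal(loc=100.0, scale=7.0, size=(50000, 3))
z = zscore(traces)
```

The median/MAD estimators are preferred by default because they are robust to the spikes themselves, which would inflate a plain mean/std.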

Class: ZeroChannelPaddedRecording
  Docstring:
    Abstract class representing a multichannel timeseries (or block of raw ephys traces).
    Internally handles a list of RecordingSegment
  __init__(self, recording: 'BaseRecording', num_channels: 'int', channel_mapping: 'Union[list, None]' = None)

Function: causal_filter(recording, direction='forward', band=[300.0, 6000.0], btype='bandpass', filter_order=5, ftype='butter', filter_mode='sos', margin_ms=5.0, add_reflect_padding=False, coeff=None, dtype=None)
  Docstring:
    Generic causal filter built on top of the filter function.
    
    Parameters
    ----------
    recording : Recording
        The recording extractor to be filtered
    direction : "forward" | "backward", default: "forward"
        Direction of causal filter. The "backward" option flips the traces in time before applying the filter
        and then flips them back.
    band : float or list, default: [300.0, 6000.0]
        If float, cutoff frequency in Hz for "highpass" filter type
        If list, band (low, high) in Hz for "bandpass" filter type
    btype : "bandpass" | "highpass", default: "bandpass"
        Type of the filter
    margin_ms : float, default: 5.0
        Margin in ms on border to avoid border effect
    coeff : array | None, default: None
        Filter coefficients in the filter_mode form.
    dtype : dtype or None, default: None
        The dtype of the returned traces. If None, the dtype of the parent recording is used
    add_reflect_padding : bool, default: False
        If True, uses a left and right margin during calculation.
    filter_order : int, default: 5
        The order of the filter for `scipy.signal.iirfilter`
    filter_mode :  "sos" | "ba", default: "sos"
        Filter form of the filter coefficients for `scipy.signal.iirfilter`:
        - second-order sections ("sos")
        - numerator/denominator : ("ba")
    ftype : str, default: "butter"
        Filter type for `scipy.signal.iirfilter` e.g. "butter", "cheby1".
    
    Returns
    -------
    filter_recording : FilterRecording
        The causal-filtered recording extractor object
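Conceptually, the "forward" and "backward" directions differ only in whether the traces are time-flipped around a single causal pass. A SciPy sketch of that distinction (not the SpikeInterface internals):

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Causal (single-pass) band-pass: unlike a zero-phase filtfilt, sosfilt
# introduces phase delay but never uses future samples.
fs = 30000.0
sos = butter(5, [300.0, 6000.0], btype="bandpass", fs=fs, output="sos")

traces = np.random.default_rng(0).standard_normal((30000, 4))
forward = sosfilt(sos, traces, axis=0)
# "backward" direction: flip in time, filter, flip back
backward = sosfilt(sos, traces[::-1], axis=0)[::-1]
```

The two directions give different traces: the phase delay is pushed toward later samples in "forward" mode and toward earlier samples in "backward" mode.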

Function: compute_motion(recording: 'BaseRecording', preset: "Literal['dredge', 'medicine', 'dredge_fast', 'nonrigid_accurate', 'nonrigid_fast_and_accurate', 'rigid_fast', 'kilosort_like']" = 'dredge_fast', detect_kwargs: 'dict' = {}, select_kwargs: 'dict' = {}, localize_peaks_kwargs: 'dict' = {}, estimate_motion_kwargs: 'dict' = {}, output_motion_info: 'bool' = False, folder: 'str | Path | None' = None, overwrite: 'bool' = False, raise_error: 'bool' = True, **job_kwargs) -> 'dict'
  Docstring:
    Function to compute motion correction based on a preset e.g. 'dredge' or 'medicine'.
    
    This function has some intermediate steps that can be controlled one by one with parameters:
      * detect peaks
      * (optional) sub-sample peaks to speed up the localization
      * localize peaks
      * estimate the motion
    
    The recording must be preprocessed (filtered and denoised at least), and we recommend not using whitening before motion
    estimation. Since the motion interpolation requires a "float" recording, the recording is cast to float32 if necessary.
    
    Parameters for each step are handled as separate dictionaries.
    For more information please check the documentation of the following functions:
    
      * :py:func:`~spikeinterface.sortingcomponents.peak_detection.detect_peaks`
      * :py:func:`~spikeinterface.sortingcomponents.peak_selection.select_peaks`
      * :py:func:`~spikeinterface.sortingcomponents.peak_localization.localize_peaks`
      * :py:func:`~spikeinterface.sortingcomponents.motion.motion.estimate_motion`
    
    
    Possible presets :
      * dredge: Official Dredge preset
      * medicine: Medicine method: https://jazlab.github.io/medicine/
      * dredge_fast: Modified and faster Dredge preset
      * nonrigid_accurate: method by Paninski lab (monopolar_triangulation + decentralized)
      * nonrigid_fast_and_accurate: mixed methods by KS & Paninski lab (grid_convolution + decentralized)
      * rigid_fast: Rigid and not super accurate but fast. Use center of mass.
      * kilosort_like: Mimic the drift correction of kilosort (grid_convolution + iterative_template)
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be transformed
    preset : str, default: "dredge_fast"
        The preset name
    folder : Path str or None, default: None
        If not None then intermediate motion info are saved into a folder
    overwrite : bool, default: False
        If True and folder is given, overwrite the folder if it already exists
    detect_kwargs : dict
        Optional parameters to overwrite the ones in the preset for "detect" step.
    select_kwargs : dict
        If not None, optional parameters to overwrite the ones in the preset for "select" step.
        If None, the "select" step is skipped.
    localize_peaks_kwargs : dict
        Optional parameters to overwrite the ones in the preset for "localize" step.
    estimate_motion_kwargs : dict
        Optional parameters to overwrite the ones in the preset for "estimate_motion" step.
    output_motion_info : bool, default: False
        If True, then the function returns a `motion_info` dictionary that contains variables
        to check intermediate steps (motion_histogram, non_rigid_windows, pairwise_displacement)
        This dictionary is the same when reloaded from the folder.
    raise_error : bool, default: True
        If True, an error is raised if motion estimation fails
        If False, the process continues and the peaks and peak_locations are still returned in `motion_info`.
    
    Returns
    -------
    motion_info : dict
        A dictionary containing a motion object, peaks, peak locations, run_times and the parameters used to compute these.

Function: compute_whitening_matrix(recording, mode, random_chunk_kwargs, apply_mean, radius_um=None, eps=None, regularize=False, regularize_kwargs=None)
  Docstring:
    Compute whitening matrix
    
    Parameters
    ----------
    recording : BaseRecording
        The recording object
    mode : str
        The mode to compute the whitening matrix.
    
        * "global": compute SVD using all channels
        * "local": compute SVD on local neighborhood (controlled by `radius_um`)
    
    random_chunk_kwargs : dict
        Keyword arguments for get_random_data_chunks()
    apply_mean : bool
        If True, the mean is removed prior to computing the covariance
    radius_um : float or None, default: None
        Used for mode = "local" to get the neighborhood
    eps : float or None, default: None
        Small epsilon to regularize SVD. If None, the default is set to 1e-8, but if the data is float type and scaled
        down to very small values, eps is automatically set to a small fraction (1e-3) of the median of the squared data.
    regularize : bool, default: False
        Boolean to decide if we want to regularize the covariance matrix, using a chosen method
        of sklearn, specified in regularize_kwargs. Default is GraphicalLassoCV
    regularize_kwargs : {'method' : 'GraphicalLassoCV'}
        Dictionary of the parameters that could be provided to the method of sklearn, if
        the covariance matrix needs to be regularized.
    Returns
    -------
    W : 2D array
        The whitening matrix
    M : 2D array or None
        The "mean" matrix
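For the "global" mode, the computation can be sketched as ZCA whitening built from the eigendecomposition (SVD) of the channel covariance; `compute_whitening` is an illustrative helper mirroring the logic, not this exact function:

```python
import numpy as np

def compute_whitening(data, eps=1e-8, apply_mean=True):
    """'global' ZCA whitening: W = U diag(1/sqrt(S + eps)) U^T from the
    SVD of the channel covariance. data: (num_samples, num_channels)."""
    M = data.mean(axis=0) if apply_mean else np.zeros(data.shape[1])
    centered = data - M
    cov = centered.T @ centered / data.shape[0]
    U, S, Ut = np.linalg.svd(cov, hermitian=True)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ Ut
    return W, M

rng = np.random.default_rng(0)
mix = rng.standard_normal((4, 4))
data = rng.standard_normal((100000, 4)) @ mix  # correlated channels
W, M = compute_whitening(data)
white = (data - M) @ W
# Whitened channels are uncorrelated with unit variance
```

The `eps` term regularizes the inverse square root when some eigenvalues are close to zero.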

Function: correct_lsb(recording, num_chunks_per_segment=20, chunk_size=10000, seed=None, verbose=False)
  Docstring:
    Estimates the LSB of the recording and divide traces by LSB
    to ensure LSB = 1. Medians are also subtracted to avoid rounding errors.
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be LSB-corrected.
    num_chunks_per_segment : int, default: 20
        Number of chunks per segment for random chunk
    chunk_size : int, default: 10000
        Size of a chunk in number for random chunk
    seed : int or None, default: None
        Random seed for random chunk
    verbose : bool, default: False
        If True, the estimated LSB value is printed
    
    Returns
    -------
    correct_lsb_recording : ScaleRecording
        The recording extractor with corrected LSB

Function: correct_motion(recording: 'BaseRecording', preset: "Literal['dredge', 'medicine', 'dredge_fast', 'nonrigid_accurate', 'nonrigid_fast_and_accurate', 'rigid_fast', 'kilosort_like']" = 'dredge_fast', folder: 'str | Path | None' = None, output_motion: 'bool' = False, output_motion_info: 'bool' = False, overwrite: 'bool' = False, detect_kwargs: 'dict' = {}, select_kwargs: 'dict' = {}, localize_peaks_kwargs: 'dict' = {}, estimate_motion_kwargs: 'dict' = {}, interpolate_motion_kwargs: 'dict' = {}, **job_kwargs)
  Docstring:
    High-level function that estimates the motion and interpolates the recording.
    
    This function has some intermediate steps that can be controlled one by one with parameters:
      * detect peaks
      * (optional) sub-sample peaks to speed up the localization
      * localize peaks
      * estimate the motion
      * create and return a `InterpolateMotionRecording` recording object
    
    Even if this function is convenient, we recommend running all steps separately for fine tuning.
    
    Optionally, this function can create a folder with files and figures ready to check.
    
    This function depends on several modular components of :py:mod:`spikeinterface.sortingcomponents`.
    
    If `select_kwargs` is None then all peaks are used for localization.
    
    The recording must be preprocessed (filtered and denoised at least), and we recommend not using whitening before motion
    estimation. Since the motion interpolation requires a "float" recording, the recording is cast to float32 if necessary.
    
    Parameters for each step are handled as separate dictionaries.
    For more information please check the documentation of the following functions:
    
      * :py:func:`~spikeinterface.sortingcomponents.peak_detection.detect_peaks`
      * :py:func:`~spikeinterface.sortingcomponents.peak_selection.select_peaks`
      * :py:func:`~spikeinterface.sortingcomponents.peak_localization.localize_peaks`
      * :py:func:`~spikeinterface.sortingcomponents.motion.motion.estimate_motion`
      * :py:func:`~spikeinterface.sortingcomponents.motion.motion.interpolate_motion`
    
    
    Possible presets : 
      * dredge: Official Dredge preset
      * medicine: Medicine method: https://jazlab.github.io/medicine/
      * dredge_fast: Modified and faster Dredge preset
      * nonrigid_accurate: method by Paninski lab (monopolar_triangulation + decentralized)
      * nonrigid_fast_and_accurate: mixed methods by KS & Paninski lab (grid_convolution + decentralized)
      * rigid_fast: Rigid and not super accurate but fast. Use center of mass.
      * kilosort_like: Mimic the drift correction of kilosort (grid_convolution + iterative_template)
    
    
    Parameters
    ----------
    recording : RecordingExtractor
        The recording extractor to be transformed
    preset : str, default: "dredge_fast"
        The preset name
    folder : Path str or None, default: None
        If not None then intermediate motion info are saved into a folder
    overwrite : bool, default: False
        If True and folder is given, overwrite the folder if it already exists
    detect_kwargs : dict
        Optional parameters to overwrite the ones in the preset for "detect" step.
    select_kwargs : dict
        If not None, optional parameters to overwrite the ones in the preset for "select" step.
        If None, the "select" step is skipped.
    localize_peaks_kwargs : dict
        Optional parameters to overwrite the ones in the preset for "localize" step.
    estimate_motion_kwargs : dict
        Optional parameters to overwrite the ones in the preset for "estimate_motion" step.
    interpolate_motion_kwargs : dict
        Optional parameters to overwrite the ones in the preset for "interpolate" step.
    output_motion_info : bool, default: False
        If True, then the function returns a `motion_info` dictionary that contains variables
        to check intermediate steps (motion_histogram, non_rigid_windows, pairwise_displacement)
        This dictionary is the same when reloaded from the folder.
    output_motion : bool, default: False
        If True, the function returns a `motion` object.
    
    **job_kwargs : keyword arguments for parallel processing:
            * chunk_duration or chunk_size or chunk_memory or total_memory
                - chunk_size : int
                    Number of samples per chunk
                - chunk_memory : str
                    Memory usage for each job (e.g. "100M", "1G", "500MiB", "2GiB")
                - total_memory : str
                    Total memory usage (e.g. "500M", "2G")
                - chunk_duration : str or float or None
                    Chunk duration in s if float or with units if str (e.g. "1s", "500ms")
            * n_jobs : int | float
                Number of jobs to use. With -1 the number of jobs is the same as number of cores.
                Using a float between 0 and 1 will use that fraction of the total cores.
            * progress_bar : bool
                If True, a progress bar is printed
            * mp_context : "fork" | "spawn" | None, default: None
                Context for multiprocessing. It can be None, "fork" or "spawn".
                Note that "fork" is only safely available on Linux systems
    
    
    Returns
    -------
    recording_corrected : Recording
        The motion corrected recording
    motion : Motion
        Optional output if `output_motion=True`.
    motion_info : dict
        Optional output if `output_motion_info=True`. This dict contains several variables
        for plotting. See `plot_motion_info()`

Function: detect_bad_channels(recording: 'BaseRecording', method: 'str' = 'coherence+psd', std_mad_threshold: 'float' = 5, psd_hf_threshold: 'float' = 0.02, dead_channel_threshold: 'float' = -0.5, noisy_channel_threshold: 'float' = 1.0, outside_channel_threshold: 'float' = -0.75, outside_channels_location: "Literal['top', 'bottom', 'both']" = 'top', n_neighbors: 'int' = 11, nyquist_threshold: 'float' = 0.8, direction: "Literal['x', 'y', 'z']" = 'y', chunk_duration_s: 'float' = 0.3, num_random_chunks: 'int' = 100, welch_window_ms: 'float' = 10.0, highpass_filter_cutoff: 'float' = 300, neighborhood_r2_threshold: 'float' = 0.9, neighborhood_r2_radius_um: 'float' = 30.0, seed: 'int | None' = None)
  Docstring:
    Perform bad channel detection.
    The recording is assumed to be filtered. If not, a highpass filter is applied on the fly.
    
    Different methods are implemented:
    
    * std : threshold on channel standard deviations
        If the standard deviation of a channel is greater than `std_mad_threshold` times the median of all
        channels' standard deviations, the channel is flagged as noisy
    * mad : same as std, but using median absolute deviations instead
    * coherence+psd : method developed by the International Brain Laboratory that detects bad channels of three types:
        * Dead channels are those with low similarity to the surrounding channels (n=`n_neighbors` median)
        * Noise channels are those with power at >80% Nyquist above the psd_hf_threshold (default 0.02 uV^2 / Hz)
          and a high coherence with "far away" channels
        * Out of brain channels are contiguous regions of channels dissimilar to the median of all channels
          at the top end of the probe (i.e. large channel number)
    * neighborhood_r2
        A method tuned for LFP use-cases, where channels should be highly correlated with their spatial
        neighbors. This method estimates the correlation of each channel with the median of its spatial
        neighbors, and considers channels bad when this correlation is too small.
    
    Parameters
    ----------
    recording : BaseRecording
        The recording for which bad channels are detected
    method : "coherence+psd" | "std" | "mad" | "neighborhood_r2", default: "coherence+psd"
        The method to be used for bad channel detection
    std_mad_threshold : float, default: 5
        The standard deviation/mad multiplier threshold
    psd_hf_threshold : float, default: 0.02
        For coherence+psd - an absolute threshold (uV^2/Hz) used as a cutoff for noise channels.
        Channels with average power at >80% Nyquist larger than this threshold
        will be labeled as noise
    dead_channel_threshold : float, default: -0.5
        For coherence+psd - threshold for channel coherence below which channels are labeled as dead
    noisy_channel_threshold : float, default: 1
        Threshold for channel coherence above which channels are labeled as noisy (together with psd condition)
    outside_channel_threshold : float, default: -0.75
        For coherence+psd - threshold for channel coherence above which channels at the edge of the recording are marked as outside
        of the brain
    outside_channels_location : "top" | "bottom" | "both", default: "top"
        For coherence+psd - location of the outside channels. If "top", only the channels at the top of the probe can be
        marked as outside channels. If "bottom", only the channels at the bottom of the probe can be
        marked as outside channels. If "both", both the channels at the top and bottom of the probe can be
        marked as outside channels
    n_neighbors : int, default: 11
        For coherence+psd - number of channel neighbors to compute median filter (needs to be odd)
    nyquist_threshold : float, default: 0.8
        For coherence+psd - frequency with respect to Nyquist (Fn=1) above which the mean of the PSD is calculated and compared
        with psd_hf_threshold
    direction : "x" | "y" | "z", default: "y"
        For coherence+psd - the depth dimension
    highpass_filter_cutoff : float, default: 300
        If the recording is not filtered, the cutoff frequency of the highpass filter
    chunk_duration_s : float, default: 0.3
        Duration of each chunk
    num_random_chunks : int, default: 100
        Number of random chunks
        Having many chunks is important for reproducibility.
    welch_window_ms : float, default: 10
        Window size in ms for scipy.signal.welch (converted to nperseg)
    neighborhood_r2_threshold : float, default: 0.9
        R^2 threshold for the neighborhood_r2 method.
    neighborhood_r2_radius_um : float, default: 30
        Spatial radius below which two channels are considered neighbors in the neighborhood_r2 method.
    seed : int or None, default: None
        The random seed to extract chunks
    
    Returns
    -------
    bad_channel_ids : np.array
        The identified bad channel ids
    channel_labels : np.array of str
        Channel labels depending on the method:
          * (coherence+psd) good/dead/noise/out
          * (std, mad) good/noise
    
    Examples
    --------
    
    >>> import spikeinterface.preprocessing as spre
    >>> bad_channel_ids, channel_labels = spre.detect_bad_channels(recording, method="coherence+psd")
    >>> # remove bad channels
    >>> recording_clean = recording.remove_channels(bad_channel_ids)
    
    Notes
    -----
    For details refer to:
    International Brain Laboratory et al. (2022). Spike sorting pipeline for the International Brain Laboratory.
    https://www.internationalbrainlab.com/repro-ephys

Function: get_motion_parameters_preset(preset)
  Docstring:
    Get the parameters tree for a given preset for motion correction.
    
    Parameters
    ----------
    preset : str
        The preset name. See available presets using `spikeinterface.preprocessing.get_motion_presets()`.

Function: get_motion_presets()
  Docstring:
    None

Function: get_spatial_interpolation_kernel(source_location, target_location, method='kriging', sigma_um=20.0, p=1, num_closest=4, sparse_thresh=None, dtype='float32', force_extrapolate=False)
  Docstring:
    Compute the spatial kernel for linear spatial interpolation.
    
    This is used for interpolation of bad channels or to correct the drift
    by interpolating between contacts.
    
    For reference, here is a simple overview on spatial interpolation:
    https://www.aspexit.com/spatial-data-interpolation-tin-idw-kriging-block-kriging-co-kriging-what-are-the-differences/
    
    Parameters
    ----------
    source_location: array shape (m, 2)
        Positions of the source channels
    target_location: array shape (n, 2)
        Positions of the target channels
    method: "kriging" | "idw" | "nearest", default: "kriging"
        Choice of the method
            "kriging" : the same kernel used in Kilosort
            "idw" : inverse distance weighted
            "nearest" : use the nearest channel
    sigma_um : float or list, default: 20.0
        Used in the "kriging" formula. When list, it needs to have 2 elements (for the x and y directions).
    p: int, default: 1
        Used in the "kriging" formula
    sparse_thresh: None or float, default: None
        For "kriging", if not None, values below this threshold are set to zero to obtain a sparse kernel.
    num_closest: int, default: 4
        Used for "idw"
    force_extrapolate: bool, default: False
        How to handle target locations that fall outside the source locations.
        When False : no extrapolation; kernel values for such targets are set to zero.
        When True : extrapolation is done with the formula of the method.
                    In that case the sum of the kernel is not forced to be 1.
    
    Returns
    -------
    interpolation_kernel: array (m, n)
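As a rough illustration of the "idw" branch, here is a minimal numpy sketch (this is not the library implementation; `idw_kernel` is a hypothetical helper, and it ignores the extrapolation handling described above):

```python
import numpy as np

def idw_kernel(source_location, target_location, num_closest=4, p=1):
    """Minimal inverse-distance-weighting kernel of shape (m, n)."""
    # distance between each of the m source and n target positions
    dist = np.linalg.norm(
        source_location[:, None, :] - target_location[None, :, :], axis=2
    )
    kernel = np.zeros_like(dist)
    for j in range(dist.shape[1]):
        closest = np.argsort(dist[:, j])[:num_closest]
        w = 1.0 / np.maximum(dist[closest, j], 1e-12) ** p
        kernel[closest, j] = w / w.sum()  # each column sums to 1
    return kernel

source = np.array([[0.0, 0.0], [0.0, 20.0], [0.0, 40.0]])
target = np.array([[0.0, 10.0]])
k = idw_kernel(source, target, num_closest=2)
```

Each column of the kernel mixes the `num_closest` source channels nearest to the corresponding target position, with weights that decay with distance.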

Function: load_motion_info(folder)
  Docstring:
    None

Function: save_motion_info(motion_info, folder, overwrite=False)
  Docstring:
    None

Function: scale_to_uV(recording: 'BasePreprocessor') -> 'BasePreprocessor'
  Docstring:
    Scale raw traces to microvolts (µV).
    
    This preprocessor uses the channel-specific gain and offset information
    stored in the recording extractor to convert the raw traces to µV units.
    
    Parameters
    ----------
    recording : BaseRecording
        The recording extractor to be scaled. The recording extractor must
        have gains and offsets otherwise an error will be raised.
    
    Raises
    ------
    AssertionError
        If the recording extractor does not have scaleable traces.

Function: train_deepinterpolation(recordings: 'BaseRecording | list[BaseRecording]', model_folder: 'str | Path', model_name: 'str', desired_shape: 'tuple[int, int]', train_start_s: 'Optional[float]' = None, train_end_s: 'Optional[float]' = None, train_duration_s: 'Optional[float]' = None, test_start_s: 'Optional[float]' = None, test_end_s: 'Optional[float]' = None, test_duration_s: 'Optional[float]' = None, test_recordings: 'Optional[BaseRecording | list[BaseRecording]]' = None, pre_frame: 'int' = 30, post_frame: 'int' = 30, pre_post_omission: 'int' = 1, existing_model_path: 'Optional[str | Path]' = None, verbose: 'bool' = True, nb_gpus: 'int' = 1, steps_per_epoch: 'int' = 10, period_save: 'int' = 100, apply_learning_decay: 'int' = 0, nb_times_through_data: 'int' = 1, learning_rate: 'float' = 0.0001, loss: 'str' = 'mean_squared_error', nb_workers: 'int' = -1, caching_validation: 'bool' = False, run_uid: 'str' = 'si', network: 'Callable | None' = None, use_gpu: 'bool' = True, disable_tf_logger: 'bool' = True, memory_gpu: 'Optional[int]' = None)
  Docstring:
    Train a deepinterpolation model from a recording extractor.
    
    Parameters
    ----------
    recordings : BaseRecording | list[BaseRecording]
        The recording(s) to be deepinterpolated. If a list is given, the recordings are concatenated
        and samples at the border of the recordings are omitted.
    model_folder : str | Path
        Path to the folder where the model will be saved
    model_name : str
        Name of the model to be used
    train_start_s : float or None, default: None
        Start time of the training in seconds. If None, the training starts at the beginning of the recording
    train_end_s : float or None, default: None
        End time of the training in seconds. If None, the training ends at the end of the recording
    train_duration_s : float, default: None
        Duration of the training in seconds. If None, the entire [train_start_s, train_end_s] is used for training
    test_start_s : float or None, default: None
        Start time of the testing in seconds. If None, the testing starts at the beginning of the recording
    test_end_s : float or None, default: None
        End time of the testing in seconds. If None, the testing ends at the end of the recording
    test_duration_s : float or None, default: None
        Duration of the testing in seconds, If None, the entire [test_start_s, test_end_s] is used for testing (not recommended)
    test_recordings : BaseRecording | list[BaseRecording] | None, default: None
        The recording(s) used for testing. If None, the training recording (or recordings) is used for testing
    desired_shape : tuple
        Shape of the input to the network
    pre_frame : int
        Number of frames before the frame to be predicted
    post_frame : int
        Number of frames after the frame to be predicted
    pre_post_omission : int
        Number of frames to be omitted before and after the frame to be predicted
    existing_model_path : str | Path | None, default: None
        Path to an existing model to be used for transfer learning
    verbose : bool, default: True
        Whether to print the progress of the training
    steps_per_epoch : int, default: 10
        Number of steps per epoch
    period_save : int, default: 100
        Period of saving the model
    apply_learning_decay : int, default: 0
        Whether to use a learning scheduler during training
    nb_times_through_data : int, default: 1
        Number of times the data is repeated during training
    learning_rate : float, default: 0.0001
        Learning rate
    loss : str, default: "mean_squared_error"
        Loss function to be used
    nb_workers : int, default: -1
        Number of workers to be used for the training
    caching_validation : bool, default: False
        Whether to cache the validation data
    run_uid : str, default: "si"
        Unique identifier for the training
    network : Callable or None, default: None
        Name of the deepinterpolation network to use. If None, the "unet_single_ephys_1024" network is used.
        The network should be a callable that takes a dictionary as input and returns a deepinterpolation network.
        See deepinterpolation.network_collection for examples.
    use_gpu : bool, default: True
        Whether to use GPU
    disable_tf_logger : bool, default: True
        Whether to disable the tensorflow logger
    memory_gpu : int, default: None
        Amount of memory to be used by the GPU
    
    Returns
    -------
    model_path : Path
        Path to the model

==== DELIM ====
API for module: spikeinterface.qualitymetrics

Class: ComputeQualityMetrics
  Docstring:
    Compute quality metrics on a `sorting_analyzer`.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    metric_names : list or None
        List of quality metrics to compute.
    metric_params : dict of dicts or None
        Dictionary with parameters for quality metrics calculation.
        Default parameters can be obtained with: `si.qualitymetrics.get_default_qm_params()`
    skip_pc_metrics : bool, default: False
        If True, PC metrics computation is skipped.
    delete_existing_metrics : bool, default: False
        If True, any quality metrics attached to the `sorting_analyzer` are deleted. If False, any metrics which were previously calculated but are not included in `metric_names` are kept.
    
    Returns
    -------
    metrics: pandas.DataFrame
        Data frame with the computed metrics.
    
    Notes
    -----
    principal_components are loaded automatically if already computed.
  __init__(self, sorting_analyzer)

Function: calculate_pc_metrics(sorting_analyzer, metric_names=None, metric_params=None, unit_ids=None, seed=None, n_jobs=1, progress_bar=False)
  Docstring:
    None

Function: compute_amplitude_cutoffs(sorting_analyzer, peak_sign='neg', num_histogram_bins=500, histogram_smoothing_value=3, amplitudes_bins_min_ratio=5, unit_ids=None)
  Docstring:
    Calculate approximate fraction of spikes missing from a distribution of amplitudes.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    peak_sign : "neg" | "pos" | "both", default: "neg"
        The sign of the peaks.
    num_histogram_bins : int, default: 500
        The number of bins to use to compute the amplitude histogram.
    histogram_smoothing_value : int, default: 3
        Controls the smoothing applied to the amplitude histogram.
    amplitudes_bins_min_ratio : int, default: 5
        The minimum ratio between number of amplitudes for a unit and the number of bins.
        If the ratio is less than this threshold, the amplitude_cutoff for the unit is set
        to NaN.
    unit_ids : list or None
        List of unit ids to compute the amplitude cutoffs. If None, all units are used.
    
    Returns
    -------
    all_fraction_missing : dict of floats
        Estimated fraction of missing spikes, based on the amplitude distribution, for each unit ID.
    
    
    Notes
    -----
    This approach assumes the amplitude histogram is symmetric (not valid in the presence of drift).
    If available, amplitudes are extracted from the "spike_amplitude" extension (recommended).
    If the "spike_amplitude" extension is not available, the amplitudes are extracted from the SortingAnalyzer,
    which usually has waveforms for a small subset of spikes (500 by default).
    
    References
    ----------
    Inspired by metric described in [Hill]_
    
    This code was adapted from:
    https://github.com/AllenInstitute/ecephys_spike_sorting/tree/master/ecephys_spike_sorting/modules/quality_metrics
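The symmetric-histogram idea from the Notes can be sketched in plain numpy. This is a simplified illustration of the general approach, not the library's implementation; `amplitude_cutoff_sketch` is a hypothetical helper:

```python
import numpy as np

def amplitude_cutoff_sketch(amplitudes, num_bins=100):
    """Estimate the fraction of missing spikes from a truncated amplitude histogram.

    Assumes the true distribution is symmetric: the mass cut off below the
    detection threshold mirrors the tail above the peak's symmetric point.
    """
    pdf, edges = np.histogram(amplitudes, bins=num_bins, density=True)
    peak = np.argmax(pdf)
    # first bin after the peak whose height falls to that of the lowest
    # (truncated) bin: the mirror image of the truncation point
    below = np.where(pdf[peak:] <= pdf[0])[0]
    if len(below) == 0:
        return np.nan
    cut = peak + below[0]
    fraction_missing = pdf[cut:].sum() * (edges[1] - edges[0])
    return min(fraction_missing, 0.5)  # capped: >= 0.5 is not estimable

# Gaussian amplitudes truncated at the low end, simulating missed spikes
rng = np.random.default_rng(0)
amps = rng.normal(100, 10, 5000)
frac = amplitude_cutoff_sketch(amps[amps > 85])
```

For the truncated Gaussian above, the estimate lands near the true missing fraction (about 7% of spikes fall below the cut).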

Function: compute_amplitude_cv_metrics(sorting_analyzer, average_num_spikes_per_bin=50, percentiles=(5, 95), min_num_bins=10, amplitude_extension='spike_amplitudes', unit_ids=None)
  Docstring:
    Calculate coefficient of variation of spike amplitudes within defined temporal bins.
    From the distribution of coefficient of variations, both the median and the "range" (the distance between the
    percentiles defined by `percentiles` parameter) are returned.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    average_num_spikes_per_bin : int, default: 50
        The average number of spikes per bin. This is used to estimate a temporal bin size using the firing rate
        of each unit. For example, if a unit has a firing rate of 10 Hz, and the average number of spikes per bin is
        100, then the temporal bin size will be 100/10 Hz = 10 s.
    percentiles : tuple, default: (5, 95)
        The percentile values from which to calculate the range.
    min_num_bins : int, default: 10
        The minimum number of bins to compute the median and range. If the number of bins is less than this then
        the median and range are set to NaN.
    amplitude_extension : str, default: "spike_amplitudes"
        The name of the extension to load the amplitudes from. "spike_amplitudes" or "amplitude_scalings".
    unit_ids : list or None
        List of unit ids to compute the amplitude spread. If None, all units are used.
    
    Returns
    -------
    amplitude_cv_median : dict
        The median of the CV
    amplitude_cv_range : dict
        The range of the CV, computed as the distance between the percentiles.
    
    Notes
    -----
    Designed by Simon Musall and Alessio Buccino.

Function: compute_amplitude_medians(sorting_analyzer, peak_sign='neg', unit_ids=None)
  Docstring:
    Compute median of the amplitude distributions (in absolute value).
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    peak_sign : "neg" | "pos" | "both", default: "neg"
        The sign of the peaks.
    unit_ids : list or None
        List of unit ids to compute the amplitude medians. If None, all units are used.
    
    Returns
    -------
    all_amplitude_medians : dict
        Estimated amplitude median for each unit ID.
    
    References
    ----------
    Inspired by metric described in [IBL]_
    This code is ported from:
    https://github.com/int-brain-lab/ibllib/blob/master/brainbox/metrics/single_units.py

Function: compute_drift_metrics(sorting_analyzer, interval_s=60, min_spikes_per_interval=100, direction='y', min_fraction_valid_intervals=0.5, min_num_bins=2, return_positions=False, unit_ids=None)
  Docstring:
    Compute drift metrics using estimated spike locations.
    Over the duration of the recording, the drift signal for each unit is calculated as the median
    position in an interval with respect to the overall median positions over the entire duration
    (reference position).
    
    The following metrics are computed for each unit (in um):
    
    * drift_ptp: peak-to-peak of the drift signal
    * drift_std: standard deviation of the drift signal
    * drift_mad: median absolute deviation of the drift signal
    
    Requires "spike_locations" extension. If this is not present, metrics are set to NaN.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    interval_s : int, default: 60
        Interval length is seconds for computing spike depth.
    min_spikes_per_interval : int, default: 100
        Minimum number of spikes for computing depth in an interval.
    direction : "x" | "y" | "z", default: "y"
        The direction along which drift metrics are estimated.
    min_fraction_valid_intervals : float, default: 0.5
        The fraction of valid (not NaN) position estimates to estimate drifts.
        E.g., if 0.5, at least 50% of the estimated positions in the intervals need to be valid,
        otherwise drift metrics are set to None.
    min_num_bins : int, default: 2
        Minimum number of bins required to return a valid metric value. In case there are
        less bins, the metric values are set to NaN.
    return_positions : bool, default: False
        If True, median positions are returned (for debugging).
    unit_ids : list or None, default: None
        List of unit ids to compute the drift metrics. If None, all units are used.
    
    Returns
    -------
    drift_ptp : dict
        The drift signal peak-to-peak in um.
    drift_std : dict
        The drift signal standard deviation in um.
    drift_mad : dict
        The drift signal median absolute deviation in um.
    median_positions : np.array (optional)
        The median positions of each unit over time (only returned if return_positions=True).
    
    Notes
    -----
    For multi-segment object, segments are concatenated before the computation. This means that if
    there are large displacements in between segments, the resulting metric values will be very high.
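The per-interval median computation described above can be sketched as follows (a simplified, single-segment illustration; `drift_metrics_sketch` is a hypothetical helper, not the library code):

```python
import numpy as np

def drift_metrics_sketch(spike_times, spike_depths, total_duration,
                         interval_s=60, min_spikes_per_interval=100):
    """Peak-to-peak, std and mad of the per-interval median depth (simplified)."""
    edges = np.arange(0, total_duration + interval_s, interval_s)
    medians = []
    for start, stop in zip(edges[:-1], edges[1:]):
        depths = spike_depths[(spike_times >= start) & (spike_times < stop)]
        if len(depths) >= min_spikes_per_interval:
            medians.append(np.median(depths))
    # drift signal: interval medians relative to the overall reference position
    drift = np.asarray(medians) - np.median(spike_depths)
    mad = np.median(np.abs(drift - np.median(drift)))
    return np.ptp(drift), np.std(drift), mad

# a unit drifting linearly by 20 um over a 600 s recording
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 600, 6000))
depths = 100 + 20 * times / 600 + rng.normal(0, 1, 6000)
drift_ptp, drift_std, drift_mad = drift_metrics_sketch(times, depths, 600)
```

With 60 s intervals, a 20 um linear drift gives interval medians spanning roughly 101 to 119 um, so drift_ptp comes out near 18 um.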

Function: compute_firing_ranges(sorting_analyzer, bin_size_s=5, percentiles=(5, 95), unit_ids=None)
  Docstring:
    Calculate firing range, the range between the 5th and 95th percentiles of the firing rates distribution
    computed in non-overlapping time bins.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object
    bin_size_s : float, default: 5
        The size of the bin in seconds.
    percentiles : tuple, default: (5, 95)
        The percentiles to compute.
    unit_ids : list or None
        List of unit ids to compute the firing range. If None, all units are used.
    
    Returns
    -------
    firing_ranges : dict
        The firing range for each unit.
    
    Notes
    -----
    Designed by Simon Musall and ported to SpikeInterface by Alessio Buccino.
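The metric reduces to a percentile spread of binned rates, which can be sketched in a few lines (a simplified illustration; `firing_range_sketch` is a hypothetical helper):

```python
import numpy as np

def firing_range_sketch(spike_times, total_duration, bin_size_s=5,
                        percentiles=(5, 95)):
    """Distance between percentiles of binned firing rates (simplified)."""
    edges = np.arange(0, total_duration + bin_size_s, bin_size_s)
    rates = np.histogram(spike_times, bins=edges)[0] / bin_size_s
    low, high = np.percentile(rates, percentiles)
    return high - low

# modulated unit: ~20 Hz in the first half, ~2 Hz in the second half of 100 s
rng = np.random.default_rng(0)
times = np.sort(np.concatenate([rng.uniform(0, 50, 1000),
                                rng.uniform(50, 100, 100)]))
firing_range = firing_range_sketch(times, total_duration=100)
```

A unit with strongly modulated activity, as above, yields a large firing range (close to 18 Hz here), while a steady unit yields a range near zero.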

Function: compute_firing_rates(sorting_analyzer, unit_ids=None)
  Docstring:
    Compute the firing rate across segments.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    unit_ids : list or None
        The list of unit ids to compute the firing rate. If None, all units are used.
    
    Returns
    -------
    firing_rates : dict of floats
        The firing rate, across all segments, for each unit ID.

Function: compute_isi_violations(sorting_analyzer, isi_threshold_ms=1.5, min_isi_ms=0, unit_ids=None)
  Docstring:
    Calculate Inter-Spike Interval (ISI) violations.
    
    It computes several metrics related to ISI violations:
        * isi_violations_ratio: the relative firing rate of the hypothetical neurons that are
                                generating the ISI violations. See Notes.
        * isi_violation_count: number of ISI violations
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        The SortingAnalyzer object.
    isi_threshold_ms : float, default: 1.5
        Threshold for classifying adjacent spikes as an ISI violation, in ms.
        This is the biophysical refractory period.
    min_isi_ms : float, default: 0
        Minimum possible inter-spike interval, in ms.
        This is the artificial refractory period enforced
        by the data acquisition system or post-processing algorithms.
    unit_ids : list or None
        List of unit ids to compute the ISI violations. If None, all units are used.
    
    Returns
    -------
    isi_violations_ratio : dict
        The isi violation ratio.
    isi_violation_count : dict
        Number of violations.
    
    Notes
    -----
    The returned ISI violations ratio approximates the fraction of spikes in each
    unit which are contaminated. The formulation assumes that the contaminating spikes
    are statistically independent from the other spikes in that cluster. This
    approximation can break down in reality, especially for highly contaminated units.
    See the discussion in Section 4.1 of [Llobet]_ for more details.
    
    This method counts the number of spikes whose ISI is violated. If there are three
    spikes within `isi_threshold_ms`, the first and second are violated. Hence there are two
    spikes which have been violated. This is in contrast to `compute_refrac_period_violations`,
    which counts the number of violations.
    
    References
    ----------
    Based on metrics originally implemented in Ultra Mega Sort [UMS]_.
    
    This implementation is based on one of the original implementations written in Matlab by Nick Steinmetz
    (https://github.com/cortex-lab/sortingQuality) and converted to Python by Daniel Denman.
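The counting convention from the Notes (violated ISIs, not violating pairs) can be illustrated with a small sketch (`isi_violation_count_sketch` is a hypothetical helper, not the library code):

```python
import numpy as np

def isi_violation_count_sketch(spike_times_s, isi_threshold_ms=1.5, min_isi_ms=0):
    """Count inter-spike intervals shorter than the threshold (simplified).

    Three spikes packed within the threshold give two violated ISIs,
    matching the convention described in the Notes.
    """
    isis = np.diff(np.sort(spike_times_s)) * 1000.0  # convert to ms
    isis = isis[isis >= min_isi_ms]  # drop intervals inside the censored period
    return int(np.sum(isis < isi_threshold_ms))

# three spikes within 1.5 ms -> two violated ISIs
count = isi_violation_count_sketch(np.array([0.0100, 0.0105, 0.0110]))
```

Compare this with the pair-counting sketch under `compute_refrac_period_violations`, where the same three spikes would yield three violations.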

Function: compute_num_spikes(sorting_analyzer, unit_ids=None, **kwargs)
  Docstring:
    Compute the number of spikes across segments.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    unit_ids : list or None
        The list of unit ids to compute the number of spikes. If None, all units are used.
    
    Returns
    -------
    num_spikes : dict
        The number of spikes, across all segments, for each unit ID.

Function: compute_pc_metrics(sorting_analyzer, metric_names=None, metric_params=None, qm_params=None, unit_ids=None, seed=None, n_jobs=1, progress_bar=False, mp_context=None, max_threads_per_worker=None) -> 'dict'
  Docstring:
    Calculate principal component derived metrics.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    metric_names : list of str, default: None
        The list of PC metrics to compute.
        If not provided, defaults to all PC metrics.
    metric_params : dict or None
        Dictionary with parameters for each PC metric function.
    unit_ids : list of int or None
        List of unit ids to compute metrics for.
    seed : int, default: None
        Random seed value.
    n_jobs : int
        Number of jobs to parallelize metric computations.
    progress_bar : bool
        If True, progress bar is shown.
    
    Returns
    -------
    pc_metrics : dict
        The computed PC metrics.

Function: compute_presence_ratios(sorting_analyzer, bin_duration_s=60.0, mean_fr_ratio_thresh=0.0, unit_ids=None)
  Docstring:
    Calculate the presence ratio, the fraction of time the unit is firing above a certain threshold.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    bin_duration_s : float, default: 60
        The duration of each bin in seconds. If the total recording duration is less than this value,
        presence_ratio is set to NaN.
    mean_fr_ratio_thresh : float, default: 0
        The unit is considered active in a bin if its firing rate during that bin
        is strictly above `mean_fr_ratio_thresh` times its mean firing rate throughout the recording.
    unit_ids : list or None
        The list of unit ids to compute the presence ratio. If None, all units are used.
    
    Returns
    -------
    presence_ratio : dict of floats
        The presence ratio for each unit ID.
    
    Notes
    -----
    The total duration, across all segments, is divided into "num_bins".
    To do so, spike trains across segments are concatenated to mimic a continuous segment.
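The binning logic above can be sketched in plain numpy for a single segment (`presence_ratio_sketch` is a hypothetical helper, not the library code):

```python
import numpy as np

def presence_ratio_sketch(spike_times, total_duration, bin_duration_s=60.0,
                          mean_fr_ratio_thresh=0.0):
    """Fraction of bins where the unit fires above a rate threshold (simplified)."""
    num_bins = int(total_duration // bin_duration_s)
    edges = np.arange(num_bins + 1) * bin_duration_s
    counts, _ = np.histogram(spike_times, bins=edges)
    mean_fr = len(spike_times) / total_duration
    # threshold expressed as spikes per bin
    thresh = mean_fr_ratio_thresh * mean_fr * bin_duration_s
    return np.sum(counts > thresh) / num_bins

# unit silent in the second half of a 600 s recording
times = np.sort(np.random.default_rng(0).uniform(0, 300, 3000))
ratio = presence_ratio_sketch(times, total_duration=600)
```

A unit active only half of the time yields a presence ratio of 0.5.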

Function: compute_refrac_period_violations(sorting_analyzer, refractory_period_ms: 'float' = 1.0, censored_period_ms: 'float' = 0.0, unit_ids=None)
  Docstring:
    Calculate the number of refractory period violations.
    
    This is similar (but slightly different) to the ISI violations.
    
    This is required for some formulas (e.g. the ones from Llobet & Wyngaard 2022).
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        The SortingAnalyzer object.
    refractory_period_ms : float, default: 1.0
        The period (in ms) where no 2 good spikes can occur.
    censored_period_ms : float, default: 0.0
        The period (in ms) where no 2 spikes can occur (because they are not detected, or
        because they were removed by another mean).
    unit_ids : list or None
        List of unit ids to compute the refractory period violations. If None, all units are used.
    
    Returns
    -------
    rp_contamination : dict
        The refractory period contamination described in [Llobet]_.
    rp_violations : dict
        Number of refractory period violations.
    
    Notes
    -----
    Requires "numba" package
    
    This method counts the number of violations which occur during the refractory period.
    For example, if there are three spikes within `refractory_period_ms`, the second and third spikes
    violate the first spike and the third spike violates the second spike. Hence there
    are three violations. This is in contrast to `compute_isi_violations`, which
    computes the number of spikes which have been violated.
    
    References
    ----------
    Based on metrics described in [Llobet]_
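The pair-counting convention from the Notes can be illustrated with a small sketch. This quadratic-time loop is only illustrative (`rp_violations_sketch` is a hypothetical helper; the library relies on numba for an efficient implementation):

```python
import numpy as np

def rp_violations_sketch(spike_times_s, refractory_period_ms=1.0,
                         censored_period_ms=0.0):
    """Count all spike pairs closer than the refractory period (simplified).

    Three spikes packed within the refractory period give three violating
    pairs, matching the counting convention described in the Notes.
    """
    t = np.sort(spike_times_s) * 1000.0  # convert to ms
    n_violations = 0
    for i in range(len(t)):
        dt = t[i + 1:] - t[i]
        # pairs inside the censored period are excluded from the count
        n_violations += np.sum((dt > censored_period_ms)
                               & (dt < refractory_period_ms))
    return int(n_violations)

# three spikes within 1.0 ms -> three violating pairs
count = rp_violations_sketch(np.array([0.0100, 0.0103, 0.0106]))
```

Compare this with the ISI sketch under `compute_isi_violations`, where the same three spikes would yield only two violated intervals.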

Function: compute_sd_ratio(sorting_analyzer: 'SortingAnalyzer', censored_period_ms: 'float' = 4.0, correct_for_drift: 'bool' = True, correct_for_template_itself: 'bool' = True, unit_ids=None, **kwargs)
  Docstring:
    Computes the SD (Standard Deviation) of each unit's spike amplitudes, and compares it to the SD of noise.
    In this case, noise refers to the global voltage trace on the same channel as the best channel of the unit.
    (ideally (not implemented yet), the noise would be computed outside of spikes from the unit itself).
    
    TODO: Take jitter into account.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    censored_period_ms : float, default: 4.0
        The censored period in milliseconds. This is to remove any potential bursts that could affect the SD.
    correct_for_drift : bool, default: True
        If True, will subtract the amplitudes sequentially to significantly reduce the impact of drift.
    correct_for_template_itself : bool, default: True
        If true, will take into account that the template itself impacts the standard deviation of the noise,
        and will make a rough estimation of what that impact is (and remove it).
    unit_ids : list or None, default: None
        The list of unit ids to compute this metric. If None, all units are used.
    **kwargs : dict, default: {}
        Keyword arguments for computing spike amplitudes and extremum channel.
    
    Returns
    -------
    sd_ratio : dict
        The SD ratio (unit amplitude SD relative to noise SD) for each unit ID.

Function: compute_sliding_rp_violations(sorting_analyzer, min_spikes=0, bin_size_ms=0.25, window_size_s=1, exclude_ref_period_below_ms=0.5, max_ref_period_ms=10, contamination_values=None, unit_ids=None)
  Docstring:
    Compute sliding refractory period violations, a metric developed by IBL which computes
    contamination by using a sliding refractory period.
    This metric computes the minimum contamination with at least 90% confidence.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    min_spikes : int, default: 0
        Contamination is set to np.nan if the unit has fewer than this many
        spikes across all segments.
    bin_size_ms : float, default: 0.25
        The size of binning for the autocorrelogram in ms.
    window_size_s : float, default: 1
        Window in seconds to compute correlogram.
    exclude_ref_period_below_ms : float, default: 0.5
        Refractory periods below this value are excluded.
    max_ref_period_ms : float, default: 10
        Maximum refractory period to test in ms.
    contamination_values : 1d array or None, default: None
        The contamination values to test. If None, it is set to np.arange(0.5, 35, 0.5).
    unit_ids : list or None
        List of unit ids to compute the sliding RP violations. If None, all units are used.
    
    Returns
    -------
    contamination : dict of floats
        The minimum contamination at 90% confidence.
    
    References
    ----------
    Based on metrics described in [IBL]_
    This code was adapted from:
    https://github.com/SteinmetzLab/slidingRefractory/blob/1.0.0/python/slidingRP/metrics.py

Function: compute_snrs(sorting_analyzer, peak_sign: 'str' = 'neg', peak_mode: 'str' = 'extremum', unit_ids=None)
  Docstring:
    Compute signal to noise ratio.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    peak_sign : "neg" | "pos" | "both", default: "neg"
        The sign of the template to compute best channels.
    peak_mode : "extremum" | "at_index" | "peak_to_peak", default: "extremum"
        How to compute the amplitude.
        "extremum" takes the maxima/minima.
        "at_index" takes the value at t=sorting_analyzer.nbefore.
        "peak_to_peak" takes the difference between the maximum and the minimum.
    unit_ids : list or None
        The list of unit ids to compute the SNR. If None, all units are used.
    
    Returns
    -------
    snrs : dict
        Computed signal to noise ratio for each unit.
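As an illustration, here is a hedged sketch of the underlying idea (not the actual implementation; the function name and inputs are hypothetical): the SNR is the template's peak amplitude on its best channel divided by the noise SD of that channel, here estimated robustly with the MAD:

```python
import numpy as np

def snr_sketch(template_peak_amplitude, trace):
    """Hedged sketch: SNR = |template peak amplitude| / noise SD.

    The noise SD is estimated robustly with the MAD (median absolute
    deviation), scaled by 0.6745 to match a Gaussian SD.
    """
    noise_std = np.median(np.abs(trace - np.median(trace))) / 0.6745
    return np.abs(template_peak_amplitude) / noise_std

rng = np.random.default_rng(42)
trace = rng.normal(0.0, 10.0, size=100_000)  # pure noise with SD 10
print(snr_sketch(-80.0, trace))              # ~8 for an 80 uV peak on 10 uV noise
```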

Function: compute_synchrony_metrics(sorting_analyzer, unit_ids=None, synchrony_sizes=None)
  Docstring:
    Compute synchrony metrics. Synchrony metrics represent the rate of occurrences of
    spikes at the exact same sample index, with synchrony sizes 2, 4 and 8.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    unit_ids : list or None, default: None
        List of unit ids to compute the synchrony metrics. If None, all units are used.
    synchrony_sizes : None, default: None
        Deprecated argument. Please use private `_get_synchrony_counts` if you need finer control over number of synchronous spikes.
    
    Returns
    -------
    sync_spike_{X} : dict
        The synchrony metric for synchrony size X.
    
    References
    ----------
    Based on concepts described in [Grün]_
    This code was adapted from `Elephant - Electrophysiology Analysis Toolkit <https://github.com/NeuralEnsemble/elephant/blob/master/elephant/spike_train_synchrony.py#L245>`_
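A minimal numpy sketch of the underlying counting logic (hypothetical names, not the Elephant-derived implementation): a "synchronous event" of size s is a sample index shared by s spikes, and each unit's metric here is the fraction of its spikes belonging to events of at least that size:

```python
import numpy as np

def synchrony_sketch(spike_samples, spike_units, unit_ids, sizes=(2, 4, 8)):
    """Hedged sketch: per-unit fraction of spikes in synchronous events."""
    spike_samples = np.asarray(spike_samples)
    spike_units = np.asarray(spike_units)
    # event size at each spike = number of spikes sharing its sample index
    _, inverse, counts = np.unique(spike_samples, return_inverse=True, return_counts=True)
    event_sizes = counts[inverse]
    metrics = {}
    for size in sizes:
        metrics[f"sync_spike_{size}"] = {
            u: float(np.mean(event_sizes[spike_units == u] >= size))
            for u in unit_ids
        }
    return metrics

# three units; sample 100 is a size-3 event, sample 200 a size-2 event
samples = [100, 100, 100, 150, 200, 200, 300]
units   = [0,   1,   2,   0,   1,   2,   0]
m = synchrony_sketch(samples, units, unit_ids=[0, 1, 2])
```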

Function: get_default_qm_params()
  Docstring:
    Return default dictionary of quality metrics parameters.
    
    Returns
    -------
    dict
        Default qm parameters with metric name as key and parameter dictionary as values.

Function: get_quality_metric_list()
  Docstring:
    Return a list of the available quality metrics.

Function: get_quality_pca_metric_list()
  Docstring:
    Get a list of the available PCA-based quality metrics.

Function: lda_metrics(all_pcs, all_labels, this_unit_id) -> 'float'
  Docstring:
    Calculate d-prime based on Linear Discriminant Analysis.
    
    Parameters
    ----------
    all_pcs : 2d array
        The PCs for all spikes, organized as [num_spikes, PCs].
    all_labels : 1d array
        The cluster labels for all spikes. Must have length of number of spikes.
    this_unit_id : int
        The ID for the unit to calculate these metrics for.
    
    Returns
    -------
    d_prime : float
        D prime measure of this unit.
    
    References
    ----------
    Based on metric described in [Hill]_
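A hedged numpy-only sketch of the d-prime idea (the real function uses a fitted LDA; here a Fisher discriminant direction is computed by hand, and all names are hypothetical): project spikes onto the axis that best separates this unit from all other spikes, then compare the projected means in units of the pooled SD:

```python
import numpy as np

def d_prime_sketch(all_pcs, all_labels, this_unit_id):
    """Hedged sketch: d-prime along a hand-computed Fisher LDA axis."""
    X1 = all_pcs[all_labels == this_unit_id]
    X2 = all_pcs[all_labels != this_unit_id]
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    # within-class scatter; Fisher direction w = Sw^-1 (mu1 - mu2)
    Sw = np.cov(X1.T) + np.cov(X2.T)
    w = np.linalg.solve(Sw, mu1 - mu2)
    p1, p2 = X1 @ w, X2 @ w
    return (p1.mean() - p2.mean()) / np.sqrt(0.5 * (p1.var() + p2.var()))

rng = np.random.default_rng(0)
pcs = np.vstack([rng.normal(0, 1, (500, 3)), rng.normal(4, 1, (500, 3))])
labels = np.array([0] * 500 + [1] * 500)
print(d_prime_sketch(pcs, labels, 0))  # well-separated clusters give a large d-prime
```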

Function: mahalanobis_metrics(all_pcs, all_labels, this_unit_id)
  Docstring:
    Calculate isolation distance and L-ratio (metrics computed from Mahalanobis distance).
    
    Parameters
    ----------
    all_pcs : 2d array
        The PCs for all spikes, organized as [num_spikes, PCs].
    all_labels : 1d array
        The cluster labels for all spikes. Must have length of number of spikes.
    this_unit_id : int
        The ID for the unit to calculate these metrics for.
    
    Returns
    -------
    isolation_distance : float
        Isolation distance of this unit.
    l_ratio : float
        L-ratio for this unit.
    
    References
    ----------
    Based on metrics described in [Schmitzer-Torbert]_
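A hedged numpy sketch of the isolation-distance part (hypothetical names; the L-ratio additionally needs the chi-square CDF and is omitted here). Isolation distance is commonly defined as the squared Mahalanobis distance, with respect to the target unit's covariance, of the n-th closest other-unit spike, where n is the target unit's spike count:

```python
import numpy as np

def isolation_distance_sketch(all_pcs, all_labels, this_unit_id):
    """Hedged sketch: squared Mahalanobis distance of the n-th closest
    other-unit spike, where n = number of spikes in this unit."""
    pcs_unit = all_pcs[all_labels == this_unit_id]
    pcs_other = all_pcs[all_labels != this_unit_id]
    n = len(pcs_unit)
    if n > len(pcs_other):
        return np.nan  # not enough other spikes to rank
    mean = pcs_unit.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(pcs_unit.T))
    delta = pcs_other - mean
    d2 = np.einsum("ij,jk,ik->i", delta, cov_inv, delta)  # squared Mahalanobis
    return float(np.sort(d2)[n - 1])

rng = np.random.default_rng(1)
pcs = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(10, 1, (200, 2))])
labels = np.array([0] * 200 + [1] * 200)
print(isolation_distance_sketch(pcs, labels, 0))  # large for well-separated clusters
```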

Function: nearest_neighbors_isolation(sorting_analyzer, this_unit_id: 'int | str', n_spikes_all_units: 'dict' = None, fr_all_units: 'dict' = None, max_spikes: 'int' = 1000, min_spikes: 'int' = 10, min_fr: 'float' = 0.0, n_neighbors: 'int' = 5, n_components: 'int' = 10, radius_um: 'float' = 100, peak_sign: 'str' = 'neg', min_spatial_overlap: 'float' = 0.5, seed=None)
  Docstring:
    Calculate unit isolation based on NearestNeighbors search in PCA space.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    this_unit_id : int | str
        The ID for the unit to calculate these metrics for.
    n_spikes_all_units : dict, default: None
        Dictionary of the form ``{<unit_id>: <n_spikes>}`` for the waveform extractor.
        Recomputed if None.
    fr_all_units : dict, default: None
        Dictionary of the form ``{<unit_id>: <firing_rate>}`` for the waveform extractor.
        Recomputed if None.
    max_spikes : int, default: 1000
        Max number of spikes to use per unit.
    min_spikes : int, default: 10
        Min number of spikes a unit must have to go through with metric computation.
        Units with spikes < min_spikes get numpy.NaN as the quality metric,
        and are ignored when selecting other units' neighbors.
    min_fr : float, default: 0.0
        Min firing rate a unit must have to go through with metric computation.
        Units with firing rate < min_fr get numpy.NaN as the quality metric,
        and are ignored when selecting other units' neighbors.
    n_neighbors : int, default: 5
        Number of neighbors to check membership of.
    n_components : int, default: 10
        The number of PC components to use to project the snippets to.
    radius_um : float, default: 100
        The radius, in um, that channels need to be within the peak channel to be included.
    peak_sign : "neg" | "pos" | "both", default: "neg"
        The peak_sign used to compute sparsity and neighbor units. Used if sorting_analyzer
        is not sparse already.
    min_spatial_overlap : float, default: 0.5
        In case sorting_analyzer is sparse, other units are selected if they share at least
        `min_spatial_overlap` times `n_target_unit_channels` with the target unit.
    seed : int, default: None
        Seed for random subsampling of spikes.
    
    Returns
    -------
    nn_isolation : float
        The calculated nearest neighbor isolation metric for `this_unit_id`.
        If the unit has fewer than `min_spikes`, returns numpy.NaN instead.
    nn_unit_id : np.int16
        Id of the "nearest neighbor" unit (unit with lowest isolation score from `this_unit_id`).
    
    Notes
    -----
    The overall logic of this approach is:
    
    #. Choose a cluster
    #. Compute the isolation score with every other cluster
    #. The unit's isolation score is the minimum over step 2 (i.e. a 'worst-case' measure)
    
    The implementation of this approach is:
    
    Let A and B be two clusters from sorting.
    
    We set \|A\| = \|B\|:
    
        * | If max_spikes < \|A\| and max_spikes < \|B\|:
          |     Then randomly subsample max_spikes samples from A and B.
        * | If max_spikes > min(\|A\|, \|B\|) (e.g. \|A\| > max_spikes > \|B\|):
          |     Then randomly subsample min(\|A\|, \|B\|) samples from A and B.
    
    This is because the metric is affected by the size of the clusters being compared
    independently of how well-isolated they are.
    
    We also restrict the waveforms to channels with significant signal.
    
    See docstring for `_compute_isolation` for the definition of isolation score.
    
    References
    ----------
    Based on isolation metric described in [Chung]_

Function: nearest_neighbors_metrics(all_pcs, all_labels, this_unit_id, max_spikes, n_neighbors)
  Docstring:
    Calculate unit contamination based on NearestNeighbors search in PCA space.
    
    Parameters
    ----------
    all_pcs : 2d array
        The PCs for all spikes, organized as [num_spikes, PCs].
    all_labels : 1d array
        The cluster labels for all spikes. Must have length of number of spikes.
    this_unit_id : int
        The ID for the unit to calculate these metrics for.
    max_spikes : int
        The number of spikes to use, per cluster.
        Note that the calculation can be very slow when this number is >20000.
    n_neighbors : int
        The number of neighbors to use.
    
    Returns
    -------
    hit_rate : float
        Fraction of neighbors for target cluster that are also in target cluster.
    miss_rate : float
        Fraction of neighbors outside target cluster that are in target cluster.
    
    Notes
    -----
    A is a (hopefully) representative subset of cluster X
    
    .. math::

        \mathrm{NN\_hit}(X) = \frac{1}{k} \sum_{i=1}^{k} \frac{\left|\{\, x \in A \;:\; \text{the } i\text{-th closest neighbor of } x \text{ is in } X \,\}\right|}{|A|}
    
    References
    ----------
    Based on metrics described in [Chung]_
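A brute-force numpy sketch of the hit-rate part of this metric (hypothetical names; the real function subsamples spikes and uses an efficient NearestNeighbors search): for each spike of the target unit, compute the fraction of its k nearest neighbors, excluding itself, that also belong to the target unit:

```python
import numpy as np

def nn_hit_rate_sketch(all_pcs, all_labels, this_unit_id, n_neighbors=4):
    """Hedged sketch: mean fraction of same-unit spikes among the
    k nearest neighbors of each target-unit spike (brute force)."""
    target = np.flatnonzero(all_labels == this_unit_id)
    hits = []
    for i in target:
        d = np.linalg.norm(all_pcs - all_pcs[i], axis=1)
        d[i] = np.inf  # exclude the spike itself
        nearest = np.argsort(d)[:n_neighbors]
        hits.append(np.mean(all_labels[nearest] == this_unit_id))
    return float(np.mean(hits))

rng = np.random.default_rng(0)
pcs = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2))])
labels = np.array([0] * 100 + [1] * 100)
print(nn_hit_rate_sketch(pcs, labels, 0))  # near 1 for well-separated clusters
```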

Function: nearest_neighbors_noise_overlap(sorting_analyzer, this_unit_id: 'int | str', n_spikes_all_units: 'dict' = None, fr_all_units: 'dict' = None, max_spikes: 'int' = 1000, min_spikes: 'int' = 10, min_fr: 'float' = 0.0, n_neighbors: 'int' = 5, n_components: 'int' = 10, radius_um: 'float' = 100, peak_sign: 'str' = 'neg', seed=None)
  Docstring:
    Calculate unit noise overlap based on NearestNeighbors search in PCA space.
    
    Parameters
    ----------
    sorting_analyzer : SortingAnalyzer
        A SortingAnalyzer object.
    this_unit_id : int | str
        The ID of the unit to calculate this metric on.
    n_spikes_all_units : dict, default: None
        Dictionary of the form ``{<unit_id>: <n_spikes>}`` for the waveform extractor.
        Recomputed if None.
    fr_all_units : dict, default: None
        Dictionary of the form ``{<unit_id>: <firing_rate>}`` for the waveform extractor.
        Recomputed if None.
    max_spikes : int, default: 1000
        The max number of spikes to use per cluster.
    min_spikes : int, default: 10
        Min number of spikes a unit must have to go through with metric computation.
        Units with spikes < min_spikes get numpy.NaN as the quality metric.
    min_fr : float, default: 0.0
        Min firing rate a unit must have to go through with metric computation.
        Units with firing rate < min_fr get numpy.NaN as the quality metric.
    n_neighbors : int, default: 5
        The number of neighbors to check membership.
    n_components : int, default: 10
        The number of PC components to use to project the snippets to.
    radius_um : float, default: 100
        The radius, in um, that channels need to be within the peak channel to be included.
    peak_sign : "neg" | "pos" | "both", default: "neg"
        The peak_sign used to compute sparsity and neighbor units. Used if sorting_analyzer
        is not sparse already.
    seed : int or None, default: None
        Random seed for subsampling spikes.
    
    Returns
    -------
    nn_noise_overlap : float
        The computed nearest neighbor noise estimate.
        If the unit has fewer than `min_spikes`, returns numpy.NaN instead.
    
    Notes
    -----
    The general logic of this measure is:
    
    1. Generate a noise cluster by randomly sampling voltage snippets from recording.
    2. Subtract projection onto the weighted average of noise snippets
       of both the target and noise clusters to correct for bias in sampling.
    3. Compute the isolation score between the noise cluster and the target cluster.
    
    As with nn_isolation, the clusters that are compared (target and noise clusters)
    have the same number of spikes.
    
    See docstring for `_compute_isolation` for the definition of isolation score.
    
    References
    ----------
    Based on noise overlap metric described in [Chung]_

Function: silhouette_score(all_pcs, all_labels, this_unit_id)
  Docstring:
    Calculate the silhouette score which is a marker of cluster quality ranging from
    -1 (bad clustering) to 1 (good clustering). Distances are all calculated as pairwise
    comparisons of all data points.
    
    Parameters
    ----------
    all_pcs : 2d array
        The PCs for all spikes, organized as [num_spikes, PCs].
    all_labels : 1d array
        The cluster labels for all spikes. Must have length of number of spikes.
    this_unit_id : int
        The ID for the unit to calculate this metric for.
    
    Returns
    -------
    unit_silhouette_score : float
        Silhouette Score for this unit.
    
    References
    ----------
    Based on [Rousseeuw]_

Function: simplified_silhouette_score(all_pcs, all_labels, this_unit_id)
  Docstring:
    Calculate the simplified silhouette score for each cluster. The value ranges
    from -1 (bad clustering) to 1 (good clustering). The simplified silhoutte score
    utilizes the centroids for distance calculations rather than pairwise calculations.
    
    Parameters
    ----------
    all_pcs : 2d array
        The PCs for all spikes, organized as [num_spikes, PCs].
    all_labels : 1d array
        The cluster labels for all spikes. Must have length of number of spikes.
    this_unit_id : int
        The ID for the unit to calculate this metric for.
    
    Returns
    -------
    unit_silhouette_score : float
        Simplified Silhouette Score for this unit.
    
    References
    ----------
    Based on simplified silhouette score suggested by [Hruschka]_
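A hedged numpy sketch of the centroid-based computation (hypothetical names, not the actual implementation): for each spike, a is the distance to its own cluster centroid, b the distance to the nearest other centroid, and the unit's score is the mean of (b - a) / max(a, b):

```python
import numpy as np

def simplified_silhouette_sketch(all_pcs, all_labels, this_unit_id):
    """Hedged sketch: per-spike silhouette using cluster centroids
    instead of all pairwise distances, averaged over the unit's spikes."""
    unit_ids = np.unique(all_labels)
    centroids = {u: all_pcs[all_labels == u].mean(axis=0) for u in unit_ids}
    own = all_pcs[all_labels == this_unit_id]
    a = np.linalg.norm(own - centroids[this_unit_id], axis=1)
    # distance to the nearest *other* centroid, per spike
    other = [u for u in unit_ids if u != this_unit_id]
    b = np.min(
        np.stack([np.linalg.norm(own - centroids[u], axis=1) for u in other]),
        axis=0,
    )
    return float(np.mean((b - a) / np.maximum(a, b)))

rng = np.random.default_rng(0)
pcs = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(6, 1, (150, 2))])
labels = np.array([0] * 150 + [1] * 150)
print(simplified_silhouette_sketch(pcs, labels, 0))  # close to 1 for good clustering
```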

==== DELIM ====
API for module: spikeinterface.sorters

Class: BaseSorter
  Docstring:
    Base Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)
  Method: get_result(self)
    Docstring:
      None
  Method: run(self, raise_error=True)
    Docstring:
      Main function kept for backward compatibility.
      This should not be used anymore but still works.
  Method: set_params(self, sorter_params)
    Docstring:
      Mimic the old API
      This should not be used anymore but still works.

Class: CombinatoSorter
  Docstring:
    Combinato Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)
  Method: get_sorter_version()
    Docstring:
      None
  Method: set_combinato_path(combinato_path: 'PathType')
    Docstring:
      None

Class: ContainerClient
  Docstring:
    Small abstraction class to run commands in:
      * docker with "docker" python package
      * singularity with "spython" python package
  __init__(self, mode, container_image, volumes, py_user_base, extra_kwargs)
  Method: run_command(self, command)
    Docstring:
      None
  Method: start(self)
    Docstring:
      None
  Method: stop(self)
    Docstring:
      None

Class: HDSortSorter
  Docstring:
    HDSort Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)
  Method: set_hdsort_path(hdsort_path: 'PathType')
    Docstring:
      None

Class: HerdingspikesSorter
  Docstring:
    HerdingSpikes Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: IronClustSorter
  Docstring:
    IronClust Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)
  Method: set_ironclust_path(ironclust_path: 'PathType')
    Docstring:
      None

Class: Kilosort2Sorter
  Docstring:
    Kilosort2 Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: Kilosort2_5Sorter
  Docstring:
    Kilosort2.5 Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)
  Method: set_kilosort2_5_path(kilosort2_5_path: 'PathType')
    Docstring:
      None

Class: Kilosort3Sorter
  Docstring:
    Kilosort3 Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)
  Method: set_kilosort3_path(kilosort3_path: 'PathType')
    Docstring:
      None

Class: Kilosort4Sorter
  Docstring:
    Kilosort4 Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: KilosortSorter
  Docstring:
    Kilosort Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: KlustaSorter
  Docstring:
    Klusta Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: Mountainsort4Sorter
  Docstring:
    Mountainsort4 Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)
  Method: get_sorter_version()
    Docstring:
      None

Class: Mountainsort5Sorter
  Docstring:
    Mountainsort5 Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)
  Method: get_sorter_version()
    Docstring:
      None

Class: PyKilosortSorter
  Docstring:
    Pykilosort Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: SimpleSorter
  Docstring:
    Implementation of a very simple sorter useful for teaching.
    The idea is quite old school:
      * detect peaks
      * project waveforms with SVD or PCA
      * apply a well-known clustering algorithm from scikit-learn

      No template matching. No auto cleaning.

      Mainly useful for a few channels (1 to 8), teaching, and testing.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: Spykingcircus2Sorter
  Docstring:
    This is a sorter class based on spikeinterface.sortingcomponents
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: SpykingcircusSorter
  Docstring:
    SpykingCircus Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)
  Method: get_sorter_version()
    Docstring:
      None

Class: Tridesclous2Sorter
  Docstring:
    This is a sorter class based on spikeinterface.sortingcomponents
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: TridesclousSorter
  Docstring:
    Tridesclous Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: WaveClusSnippetsSorter
  Docstring:
    WaveClus Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: WaveClusSorter
  Docstring:
    WaveClus Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Class: YassSorter
  Docstring:
    YASS Sorter object.
  __init__(self, recording=None, output_folder=None, verbose=False, remove_existing_folder=False, delete_output_folder=False)

Function: available_sorters()
  Docstring:
    Lists available sorters.

Function: get_default_sorter_params(sorter_name_or_class) -> 'dict'
  Docstring:
    Returns default parameters for the specified sorter.
    
    Parameters
    ----------
    sorter_name_or_class : str or SorterClass
        The sorter to retrieve default parameters from.
    
    Returns
    -------
    default_params : dict
        Dictionary with default params for the specified sorter.

Function: get_sorter_description(sorter_name_or_class) -> 'dict'
  Docstring:
    Returns a brief description for the specified sorter.
    
    Parameters
    ----------
    sorter_name_or_class : str or SorterClass
        The sorter to retrieve description from.
    
    Returns
    -------
    params_description : dict
        Dictionary with parameter description.

Function: get_sorter_params_description(sorter_name_or_class) -> 'dict'
  Docstring:
    Returns a description of the parameters for the specified sorter.
    
    Parameters
    ----------
    sorter_name_or_class : str or SorterClass
        The sorter to retrieve parameters description from.
    
    Returns
    -------
    params_description : dict
        Dictionary with parameter description

Function: install_package_in_container(container_client, package_name, installation_mode='pypi', extra=None, version=None, tag=None, github_url=None, container_folder_source=None, verbose=False)
  Docstring:
    Install a package in a container with different modes:
    
    * pypi: pip install package_name
    * github: pip install {github_url}/archive/{tag/version}.tar.gz#egg=package_name
    * folder: pip install folder
    
    Parameters
    ----------
    container_client : ContainerClient
        The container client
    package_name : str
        The package name
    installation_mode : str
        The installation mode
    extra : str
        Extra pip install arguments, e.g. [full]
    version : str
        The package version to install
    tag : str
        The github tag to install
    github_url : str
        The github url to install (needed for github mode)
    container_folder_source : str
        The container folder source (needed for folder mode)
    verbose : bool
        If True, print output of pip install command
    
    Returns
    -------
    res_output : str
        The output of the pip install command
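The pip install target for each mode can be sketched as a small helper (hypothetical; the github archive pattern is taken from the docstring above, while version pinning with `==` for the pypi mode is an assumption):

```python
def pip_target_sketch(package_name, installation_mode="pypi",
                      github_url=None, tag=None, version=None,
                      container_folder_source=None):
    """Hedged sketch of the pip install target built for each mode
    (parameter names mirror install_package_in_container)."""
    if installation_mode == "pypi":
        # assumption: a version pin uses the usual "pkg==version" form
        return package_name if version is None else f"{package_name}=={version}"
    if installation_mode == "github":
        ref = tag if tag is not None else version
        return f"{github_url}/archive/{ref}.tar.gz#egg={package_name}"
    if installation_mode == "folder":
        return str(container_folder_source)
    raise ValueError(f"unknown installation_mode: {installation_mode}")

print(pip_target_sketch(
    "spikeinterface", "github",
    github_url="https://github.com/SpikeInterface/spikeinterface",
    tag="0.101.0",
))
```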

Function: installed_sorters()
  Docstring:
    Lists installed sorters.

Function: print_sorter_versions()
  Docstring:
    Prints the versions of the installed sorters.

Function: read_sorter_folder(folder, register_recording=True, sorting_info=True, raise_error=True)
  Docstring:
    Load a sorting object from a spike sorting output folder.
    The 'folder' must contain a valid 'spikeinterface_log.json' file.
    
    
    Parameters
    ----------
    folder : Path or str
        The sorter folder
    register_recording : bool, default: True
        Attach recording (when json or pickle) to the sorting
    sorting_info : bool, default: True
        Attach sorting info to the sorting
    raise_error : bool, default: True
        Raise an error if the spike sorting failed

Function: run_sorter(sorter_name: 'str', recording: 'BaseRecording', folder: 'Optional[str]' = None, remove_existing_folder: 'bool' = False, delete_output_folder: 'bool' = False, verbose: 'bool' = False, raise_error: 'bool' = True, docker_image: 'Optional[Union[bool, str]]' = False, singularity_image: 'Optional[Union[bool, str]]' = False, delete_container_files: 'bool' = True, with_output: 'bool' = True, output_folder: 'None' = None, **sorter_params)
  Docstring:
    Generic function to run a sorter via function approach.
    
    
    Parameters
    ----------
    sorter_name : str
        The sorter name
    recording : RecordingExtractor
        The recording extractor to be spike sorted
    folder : str or Path
        Path to output folder
    remove_existing_folder : bool
        If True and folder exists then delete.
    delete_output_folder : bool, default: False
        If True, output folder is deleted
    verbose : bool, default: False
        If True, output is verbose
    raise_error : bool, default: True
        If True, an error is raised if spike sorting fails
        If False, the process continues and the error is logged in the log file.
    docker_image : bool or str, default: False
        If True, pull the default docker container for the sorter and run the sorter in that container using docker.
        Use a str to specify a non-default container. If that container is not local it will be pulled from docker hub.
        If False, the sorter is run locally
    singularity_image : bool or str, default: False
        If True, pull the default docker container for the sorter and run the sorter in that container using
        singularity. Use a str to specify a non-default container. If that container is not local it will be pulled
        from Docker Hub. If False, the sorter is run locally
    with_output : bool, default: True
        If True, the output Sorting is returned as a Sorting
    delete_container_files : bool, default: True
        If True, the container temporary files are deleted after the sorting is done
    output_folder : None, default: None
        Do not use. Deprecated output function to be removed in 0.103.
    **sorter_params : keyword args
        Spike sorter specific arguments (they can be retrieved with `get_default_sorter_params(sorter_name_or_class)`)
    
    Returns
    -------
    BaseSorting | None
        The spike sorted data (if `with_output` is True) or None (if `with_output` is False)
    
    
    Examples
    --------
    >>> sorting = run_sorter("tridesclous", recording)

Function: run_sorter_by_property(sorter_name, recording, grouping_property, folder, mode_if_folder_exists=None, engine='loop', engine_kwargs={}, verbose=False, docker_image=None, singularity_image=None, working_folder: 'None' = None, **sorter_params)
  Docstring:
    Generic function to run a sorter on a recording after splitting by a "grouping_property" (e.g. "group").
    
    Internally, the function works as follows:
        * the recording is split based on the provided "grouping_property" (using the "split_by" function)
        * the "run_sorters" function is run on the split recordings
        * sorting outputs are aggregated using the "aggregate_units" function
        * the "grouping_property" is added as a property to the SortingExtractor
    
    Parameters
    ----------
    sorter_name : str
        The sorter name
    recording : BaseRecording
        The recording to be sorted
    grouping_property : object
        Property to split by before sorting
    folder : str | Path
        The working directory.
    mode_if_folder_exists : bool or None, default: None
        Must be None. This is deprecated.
        If not None then a warning is raised.
        Will be removed in next release.
    engine : "loop" | "joblib" | "dask", default: "loop"
        Which engine to use to run sorter.
    engine_kwargs : dict
        This contains kwargs specific to the launcher engine:
            * "loop" : no kwargs
            * "joblib" : {"n_jobs" : } number of processes
            * "dask" : {"client":} the dask client for submitting task
    verbose : bool, default: False
        Controls sorter verbosity
    docker_image : None or str, default: None
        If str, run the sorter inside a container (docker) using the docker package
    singularity_image : None or str, default: None
        If str, run the sorter inside a container (singularity) using the spython package
    **sorter_params : keyword args
        Spike sorter specific arguments (they can be retrieved with `get_default_sorter_params(sorter_name_or_class)`)
    
    Returns
    -------
    sorting : UnitsAggregationSorting
        The aggregated SortingExtractor.
    
    Examples
    --------
    This example shows how to run spike sorting split by group using the "joblib" backend with 4 jobs for parallel
    processing.
    
    >>> sorting = si.run_sorter_by_property("tridesclous", recording, grouping_property="group",
                                            folder="sort_by_group", engine="joblib",
                                            engine_kwargs={"n_jobs": 4})

Function: run_sorter_container(sorter_name: 'str', recording: 'BaseRecording', mode: 'str', container_image: 'Optional[str]' = None, folder: 'Optional[str]' = None, remove_existing_folder: 'bool' = True, delete_output_folder: 'bool' = False, verbose: 'bool' = False, raise_error: 'bool' = True, with_output: 'bool' = True, delete_container_files: 'bool' = True, extra_requirements=None, installation_mode='auto', spikeinterface_version=None, spikeinterface_folder_source=None, output_folder: 'None' = None, **sorter_params)
  Docstring:
    Parameters
    ----------
    sorter_name : str
        The sorter name
    recording : BaseRecording
        The recording extractor to be spike sorted
    mode : str
        The container mode : "docker" or "singularity"
    container_image : str, default: None
        The container image name and tag. If None, the default container image is used
    folder : str, default: None
        Path to output folder
    remove_existing_folder : bool, default: True
        If True and the folder already exists, it is deleted
    delete_output_folder : bool, default: False
        If True, output folder is deleted
    verbose : bool, default: False
        If True, output is verbose
    raise_error : bool, default: True
        If True, an error is raised if spike sorting fails
    with_output : bool, default: True
        If True, the output Sorting is returned as a Sorting
    delete_container_files : bool, default: True
        If True, the container temporary files are deleted after the sorting is done
    extra_requirements : list, default: None
        List of extra requirements to install in the container
    installation_mode : "auto" | "pypi" | "github" | "folder" | "dev" | "no-install", default: "auto"
        How spikeinterface is installed in the container:
          * "auto" : if host installation is a pip release then use "github" with tag
                    if host installation is DEV_MODE=True then use "dev"
          * "pypi" : use pypi with pip install spikeinterface
          * "github" : use github with `pip install git+https`
          * "folder" : mount a folder in the container and install from it.
                      The spikeinterface version in the container can then differ from the host version,
                      which is useful for cross checks
          * "dev" : same as "folder", but the folder is the spikeinterface.__file__ to ensure same version as host
          * "no-install" : do not install spikeinterface in the container because it is already installed
    spikeinterface_version : str, default: None
        The spikeinterface version to install in the container. If None, the current version is used
    spikeinterface_folder_source : Path or None, default: None
        In case of installation_mode="folder", the spikeinterface folder source to use to install in the container
    **sorter_params : keyword args for the sorter

Function: run_sorter_jobs(job_list, engine='loop', engine_kwargs={}, return_output=False)
  Docstring:
    Run several :py:func:`run_sorter()` sequentially or in parallel given a list of jobs.
    
    For **engine="loop"** this is equivalent to:
    
    .. code::
    
        for job in job_list:
            run_sorter(**job)
    
    The following engines block the I/O:
      * "loop"
      * "joblib"
      * "multiprocessing"
      * "dask"
    
    The following engines are *asynchronous*:
      * "slurm"
    
    Where *blocking* means that this function does not return until the results are ready.
    This is in opposition to *asynchronous*, where the function returns `None` almost immediately (aka non-blocking),
    but the results must be retrieved by hand once the jobs are finished. No mechanism is provided here to know
    when jobs are finished.
    In this *asynchronous* case, :py:func:`~spikeinterface.sorters.read_sorter_folder()` helps to retrieve individual results.
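    The blocking "loop" engine can be sketched in pure Python. In this sketch, `run_sorter_fn` is a hypothetical stand-in for `run_sorter` so the example stays self-contained; the real engine calls spikeinterface's `run_sorter` with each job dict:

    ```python
    # Minimal sketch of the "loop" engine: run each job sequentially, blocking.
    def run_sorter_jobs_loop(job_list, run_sorter_fn, return_output=False):
        outputs = []
        for job in job_list:
            result = run_sorter_fn(**job)  # each job finishes before the next starts
            if return_output:
                outputs.append(result)
        return outputs if return_output else None
    ```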
    
    
    Parameters
    ----------
    job_list : list of dict
        A list of dicts whose entries are propagated to run_sorter(...)
    engine : str "loop", "joblib", "multiprocessing", "dask", "slurm"
        The engine used to run the list.
        * "loop" : a simple sequential loop
    engine_kwargs : dict
        Keyword arguments for the chosen engine.

    return_output : bool, default: False
        If True, return the list of sortings; otherwise return None.
        This also overwrites the corresponding kwarg in run_sorter(with_output=True/False)
    
    Returns
    -------
    sortings : None or list of sorting
        With engine="loop" or "joblib", you can optionally get the list of sorting results directly if return_output=True.

Function: run_sorter_local(sorter_name, recording, folder=None, remove_existing_folder=True, delete_output_folder=False, verbose=False, raise_error=True, with_output=True, output_folder=None, **sorter_params)
  Docstring:
    Runs a sorter locally.
    
    Parameters
    ----------
    sorter_name : str
        The sorter name
    recording : RecordingExtractor
        The recording extractor to be spike sorted
    folder : str or Path
        Path to output folder. If None, a folder is created in the current directory
    remove_existing_folder : bool, default: True
        If True and the output folder already exists, it is deleted
    delete_output_folder : bool, default: False
        If True, output folder is deleted
    verbose : bool, default: False
        If True, output is verbose
    raise_error : bool, default: True
        If True, an error is raised if spike sorting fails.
        If False, the process continues and the error is logged in the log file
    with_output : bool, default: True
        If True, the output Sorting is returned
    output_folder : None, default: None
        Do not use. Deprecated argument, to be removed in 0.103.
    **sorter_params : keyword args

==== DELIM ====
API for module: spikeinterface.widgets

Class: AgreementMatrixWidget
  Docstring:
        Plots sorting comparison agreement matrix.
    
        Parameters
        ----------
        sorting_comparison : GroundTruthComparison or SymmetricSortingComparison
            The sorting comparison object.
            Can optionally be symmetric if given a SymmetricSortingComparison
        ordered : bool, default: True
            If True, units are ordered by best agreement scores,
            so that agreement scores appear along the diagonal
        count_text : bool, default: True
            If True counts are displayed as text
        unit_ticks : bool, default: True
            If True unit tick labels are displayed
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_comparison, ordered=True, count_text=True, unit_ticks=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
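For context, the score displayed in the agreement matrix is, in spikeinterface's comparison framework, the number of matched spikes divided by the union of the two spike trains. A small sketch of that formula (the helper name is hypothetical, not part of the widget API):

```python
def agreement_score(num_matches, num_spikes_1, num_spikes_2):
    # matched events divided by the union of both spike trains
    denom = num_spikes_1 + num_spikes_2 - num_matches
    return num_matches / denom if denom > 0 else 0.0
```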

Class: AllAmplitudesDistributionsWidget
  Docstring:
        Plots distributions of amplitudes as violin plot for all or some units.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer
        unit_ids : list
            List of unit ids, default None
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, unit_colors=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: AmplitudesWidget
  Docstring:
        Plots spike amplitudes
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The input SortingAnalyzer
        unit_ids : list or None, default: None
            List of unit ids
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        segment_index : int or None, default: None
            The segment index (or None if mono-segment)
        max_spikes_per_unit : int or None, default: None
            Number of max spikes per unit to display. Use None for all spikes
        y_lim : tuple or None, default: None
            The min and max of the y-axis to display. If None, the min and max of the amplitudes.
        scatter_decimate : int, default: 1
            If equal to n, each nth spike is kept for plotting.
        hide_unit_selector : bool, default: False
            If True the unit selector is not displayed
            (sortingview backend)
        plot_histograms : bool, default: False
            If True, a histogram of the amplitudes is plotted on the right axis
            (matplotlib backend)
        bins : int or None, default: None
            If plot_histograms is True, the number of bins for the amplitude histogram.
            If None, uses 100 bins.
        plot_legend : bool, default: True
            True includes legend in plot
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, unit_colors=None, segment_index=None, max_spikes_per_unit=None, y_lim=None, scatter_decimate=1, hide_unit_selector=False, plot_histograms=False, bins=None, plot_legend=True, backend=None, **backend_kwargs)
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
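The `scatter_decimate` parameter keeps every nth spike before plotting. A sketch of that behavior with numpy slicing (the helper is hypothetical, not part of the widget API):

```python
import numpy as np

def decimate_for_plot(values, scatter_decimate=1):
    # keep every nth element, as scatter_decimate=n does for spikes
    return np.asarray(values)[::scatter_decimate]
```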

Class: AutoCorrelogramsWidget
  Docstring:
        Plots unit auto correlograms.
    
        Parameters
        ----------
        sorting_analyzer_or_sorting : SortingAnalyzer or BaseSorting
            The object to compute/get crosscorrelograms from
        unit_ids : list or None, default: None
            List of unit ids
        min_similarity_for_correlograms : float, default: 0.2
            For sortingview backend. Threshold for computing pair-wise cross-correlograms.
            If template similarity between two units is below this threshold, the cross-correlogram is not displayed.
            For auto-correlograms plot, this is automatically set to None.
        window_ms : float, default: 100.0
            Window for CCGs in ms. If correlograms are already computed (e.g. with SortingAnalyzer),
            this argument is ignored
        bin_ms : float, default: 1.0
            Bin size in ms. If correlograms are already computed (e.g. with SortingAnalyzer),
            this argument is ignored
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        backend: str
        
        * matplotlib
        * sortingview
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, *args, **kargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: ComparisonCollisionBySimilarityWidget
  Docstring:
        Plots CollisionGTComparison pair by pair, ordered by cosine similarity
    
        Parameters
        ----------
        comp : CollisionGTComparison
            The collision ground truth comparison object
        templates : array
            Templates of the units
        mode : "heatmap" or "lines"
            "heatmap" shows collision curves for every pair; "lines" shows curves averaged over pairs.
        similarity_bins : array
            If mode is "lines", the bins used to average the pairs
        cmap : string
            Colormap used to show the averages if mode is "lines"
        metric : "cosine_similarity"
            Metric used for ordering
        good_only : bool, default: False
            Keep only the pairs with non-zero accuracy (found templates)
        min_accuracy : float
            If good_only, the minimum accuracy every cell should have, individually, to be
            considered in a putative pair
        unit_ids : list
            List of considered units
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, comp, templates, unit_ids=None, metric='cosine_similarity', figure=None, ax=None, mode='heatmap', similarity_bins=np.linspace(-0.4, 1, 8), cmap='winter', good_only=False, min_accuracy=0.9, show_legend=False, ylim=(0, 1), backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
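In "lines" mode, the per-pair collision curves are averaged within `similarity_bins`. A hypothetical numpy sketch of that averaging step (helper name and exact handling of empty bins are assumptions, for illustration only):

```python
import numpy as np

def average_curves_by_similarity(similarities, curves, similarity_bins):
    # average the per-pair curves whose similarity falls in each bin;
    # bins containing no pairs yield NaN rows
    similarities = np.asarray(similarities)
    curves = np.asarray(curves, dtype=float)
    out = np.full((len(similarity_bins) - 1, curves.shape[1]), np.nan)
    for i, (lo, hi) in enumerate(zip(similarity_bins[:-1], similarity_bins[1:])):
        mask = (similarities >= lo) & (similarities < hi)
        if mask.any():
            out[i] = curves[mask].mean(axis=0)
    return out
```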

Class: ConfusionMatrixWidget
  Docstring:
        Plots sorting comparison confusion matrix.
    
        Parameters
        ----------
        gt_comparison : GroundTruthComparison
            The ground truth sorting comparison object
        count_text : bool
            If True counts are displayed as text
        unit_ticks : bool
            If True unit tick labels are displayed
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, gt_comparison, count_text=True, unit_ticks=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: CrossCorrelogramsWidget
  Docstring:
        Plots unit cross correlograms.
    
        Parameters
        ----------
        sorting_analyzer_or_sorting : SortingAnalyzer or BaseSorting
            The object to compute/get crosscorrelograms from
        unit_ids : list or None, default: None
            List of unit ids
        min_similarity_for_correlograms : float, default: 0.2
            For sortingview backend. Threshold for computing pair-wise cross-correlograms.
            If template similarity between two units is below this threshold, the cross-correlogram is not displayed.
            For auto-correlograms plot, this is automatically set to None.
        window_ms : float, default: 100.0
            Window for CCGs in ms. If correlograms are already computed (e.g. with SortingAnalyzer),
            this argument is ignored
        bin_ms : float, default: 1.0
            Bin size in ms. If correlograms are already computed (e.g. with SortingAnalyzer),
            this argument is ignored
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        backend: str
        
        * matplotlib
        * sortingview
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer_or_sorting: 'Union[SortingAnalyzer, BaseSorting]', unit_ids=None, min_similarity_for_correlograms=0.2, window_ms=100.0, bin_ms=1.0, hide_unit_selector=False, unit_colors=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
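`window_ms` and `bin_ms` define the correlogram histogram. A brute-force sketch of a cross-correlogram under that convention (hypothetical helper, O(n1*n2), for illustration only; the library uses a faster implementation):

```python
import numpy as np

def cross_correlogram(times1_s, times2_s, window_ms=100.0, bin_ms=1.0):
    # histogram of all pairwise lags (times2 - times1) within +/- window_ms / 2
    half = window_ms / 2.0
    lags_ms = (np.asarray(times2_s)[None, :] - np.asarray(times1_s)[:, None]) * 1000.0
    lags_ms = lags_ms[np.abs(lags_ms) <= half]
    bin_edges = np.arange(-half, half + bin_ms, bin_ms)
    counts, _ = np.histogram(lags_ms, bins=bin_edges)
    return counts, bin_edges
```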

Class: DriftRasterMapWidget
  Docstring:
        Plot the drift raster map from peaks or a SortingAnalyzer.
        The drift raster map is a scatter plot of the estimated peak depth vs time and it is
        useful to visualize the drift over the course of the recording.
    
        Parameters
        ----------
        peaks : np.array | None, default: None
            The peaks array, with dtype ("sample_index", "channel_index", "amplitude", "segment_index"),
            as returned by the `detect_peaks` or `correct_motion` functions.
        peak_locations : np.array | None, default: None
            The peak locations, with dtype ("x", "y") or ("x", "y", "z"), as returned by the
            `localize_peaks` or `correct_motion` functions.
        sorting_analyzer : SortingAnalyzer | None, default: None
            The sorting analyzer object. To use this function, the `SortingAnalyzer` must have the
            "spike_locations" extension computed.
        direction : "x" or "y", default: "y"
            The direction to display. "y" is the depth direction.
        segment_index : int | None, default: None
            The segment index to display.
        recording : RecordingExtractor | None, default: None
            The recording extractor object (only used to get "real" times).
        sampling_frequency : float, default: None
            The sampling frequency (needed if recording is None).
        depth_lim : tuple or None, default: None
            The min and max depth to display, if None (min and max of the recording).
        scatter_decimate : int, default: None
            If equal to n, each nth spike is kept for plotting.
        color_amplitude : bool, default: True
            If True, the color of the scatter points is the amplitude of the peaks.
        cmap : str, default: "inferno"
            The colormap to use for the amplitude.
        color : str, default: "Gray"
            The color of the scatter points if color_amplitude is False.
        clim : tuple or None, default: None
            The min and max amplitude to display, if None (min and max of the amplitudes).
        alpha : float, default: 1
            The alpha of the scatter points.
        backend: str
        
        * matplotlib
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, peaks: 'np.array | None' = None, peak_locations: 'np.array | None' = None, sorting_analyzer: 'SortingAnalyzer | None' = None, direction: 'str' = 'y', recording: 'BaseRecording | None' = None, sampling_frequency: 'float | None' = None, segment_index: 'int | None' = None, depth_lim: 'tuple[float, float] | None' = None, color_amplitude: 'bool' = True, scatter_decimate: 'int | None' = None, cmap: 'str' = 'inferno', color: 'str' = 'Gray', clim: 'tuple[float, float] | None' = None, alpha: 'float' = 1, backend: 'str | None' = None, **backend_kwargs)
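The widget scatters estimated peak depth against time. A hypothetical numpy sketch of deriving those (time, depth) pairs from the `peaks` and `peak_locations` arrays described above (the helper name is an assumption; the widget does this internally):

```python
import numpy as np

def drift_raster_points(peaks, peak_locations, sampling_frequency, direction="y"):
    # convert sample indices to seconds and pick the depth coordinate
    times_s = peaks["sample_index"] / sampling_frequency
    depths = peak_locations[direction]
    return times_s, depths
```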

Class: ISIDistributionWidget
  Docstring:
        Plots spike train ISI distribution.
    
        Parameters
        ----------
        sorting : SortingExtractor
            The sorting extractor object
        unit_ids : list
            List of unit ids
        bin_ms : float
            Bin size in ms
        window_ms : float
            Window size in ms
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting, unit_ids=None, window_ms=100.0, bin_ms=1.0, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
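An ISI distribution is a histogram of inter-spike intervals over `window_ms` with `bin_ms` bins. A minimal sketch (hypothetical helper, spike times in seconds, intervals binned in ms):

```python
import numpy as np

def isi_histogram(spike_times_s, window_ms=100.0, bin_ms=1.0):
    # inter-spike intervals in ms, binned from 0 to window_ms
    isis_ms = np.diff(np.sort(np.asarray(spike_times_s))) * 1000.0
    bin_edges = np.arange(0.0, window_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(isis_ms, bins=bin_edges)
    return counts, bin_edges
```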

Class: LocationsWidget
  Docstring:
        Plots spike locations as a function of time
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The input sorting analyzer
        unit_ids : list or None, default: None
            List of unit ids
        segment_index : int or None, default: None
            The segment index (or None if mono-segment)
        max_spikes_per_unit : int or None, default: None
            Number of max spikes per unit to display. Use None for all spikes
        plot_histograms : bool, default: False
            If True, a histogram of the locations is plotted on the right axis
            (matplotlib backend)
        bins : int or None, default: None
            If plot_histograms is True, the number of bins for the location histogram.
            If None this is automatically adjusted
        plot_legend : bool, default: True
            True includes legend in plot
        locations_axis : str, default: 'y'
            Which location axis to use when plotting locations.
        backend: str
        
        * matplotlib
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, unit_colors=None, segment_index=None, max_spikes_per_unit=None, plot_histograms=False, bins=None, plot_legend=True, locations_axis='y', backend=None, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: MotionInfoWidget
  Docstring:
        Plot motion information from the motion_info dictionary returned by the `correct_motion()` function.
        This widget plots:
            * the motion itself
            * the drift raster map (peak depth vs time) before correction
            * the drift raster map (peak depth vs time) after correction
    
        Parameters
        ----------
        motion_info : dict
            The motion info returned by correct_motion() or loaded back with load_motion_info().
        recording : RecordingExtractor
            The recording extractor object
        segment_index : int, default: None
            The segment index to display.
        sampling_frequency : float, default: None
            The sampling frequency (needed if recording is None).
        depth_lim : tuple or None, default: None
            The min and max depth to display, if None (min and max of the recording).
        motion_lim : tuple or None, default: None
            The min and max motion to display, if None (min and max of the motion).
        scatter_decimate : int, default: None
            If equal to n, each nth spike is kept for plotting.
        color_amplitude : bool, default: False
            If True, the color of the scatter points is the amplitude of the peaks.
        amplitude_cmap : str, default: "inferno"
            The colormap to use for the amplitude.
        amplitude_color : str, default: "Gray"
            The color of the scatter points if color_amplitude is False.
        amplitude_clim : tuple or None, default: None
            The min and max amplitude to display, if None (min and max of the amplitudes).
        amplitude_alpha : float, default: 1
            The alpha of the scatter points.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, motion_info: 'dict', recording: 'BaseRecording', segment_index: 'int | None' = None, depth_lim: 'tuple[float, float] | None' = None, motion_lim: 'tuple[float, float] | None' = None, color_amplitude: 'bool' = False, scatter_decimate: 'int | None' = None, amplitude_cmap: 'str' = 'inferno', amplitude_color: 'str' = 'Gray', amplitude_clim: 'tuple[float, float] | None' = None, amplitude_alpha: 'float' = 1, backend: 'str | None' = None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: MotionWidget
  Docstring:
        Plot the Motion object.
    
        Parameters
        ----------
        motion : Motion
            The motion object.
        segment_index : int | None, default: None
            If the Motion object has multiple segments, this must not be None.
        mode : "auto" | "line" | "map", default: "line"
            How to plot the motion.
            "line" plots estimated motion at different depths as lines.
            "map" plots estimated motion at different depths as a heatmap.
            "auto" makes it automatic depending on the number of motion depths.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, motion: 'Motion', segment_index: 'int | None' = None, mode: 'str' = 'line', motion_lim: 'float | None' = None, backend: 'str | None' = None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: MultiCompAgreementBySorterWidget
  Docstring:
        Plots multi comparison agreement as pie or bar plot.
    
        Parameters
        ----------
        multi_comparison : BaseMultiComparison
            The multi comparison object
        plot_type : "pie" | "bar", default: "pie"
            The plot type
        cmap : matplotlib colormap, default: "YlOrRd"
            The colormap to be used for the nodes
        fontsize : int, default: 9
            The text fontsize
        show_legend : bool, default: True
            Show the legend in the last axes
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they is created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, multi_comparison, plot_type='pie', cmap='YlOrRd', fontsize=9, show_legend=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: MultiCompGlobalAgreementWidget
  Docstring:
        Plots multi comparison agreement as pie or bar plot.
    
        Parameters
        ----------
        multi_comparison : BaseMultiComparison
            The multi comparison object
        plot_type : "pie" | "bar", default: "pie"
            The plot type
        cmap : matplotlib colormap, default: "YlOrRd"
            The colormap to be used for the nodes
        fontsize : int, default: 9
            The text fontsize
        show_legend : bool, default: True
            If True a legend is shown
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, multi_comparison, plot_type='pie', cmap='YlOrRd', fontsize=9, show_legend=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: MultiCompGraphWidget
  Docstring:
        Plots multi comparison graph.
    
        Parameters
        ----------
        multi_comparison : BaseMultiComparison
            The multi comparison object
        draw_labels : bool, default: False
            If True unit labels are shown
        node_cmap : matplotlib colormap, default: "viridis"
            The colormap to be used for the nodes
        edge_cmap : matplotlib colormap, default: "hot"
            The colormap to be used for the edges
        alpha_edges : float, default: 0.5
            Alpha value for edges
        colorbar : bool, default: False
            If True a colorbar for the edges is plotted
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, multi_comparison, draw_labels=False, node_cmap='viridis', edge_cmap='hot', alpha_edges=0.5, colorbar=False, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: PeakActivityMapWidget
  Docstring:
        Plots spike rate (estimated with detect_peaks()) as 2D activity map.
    
        Can be static (bin_duration_s=None) or animated (bin_duration_s=60.)
    
        Parameters
        ----------
        recording : RecordingExtractor
            The recording extractor object.
        peaks : numpy array with peak_dtype
            The pre detected peaks (with the `detect_peaks()` function).
        bin_duration_s : None or float, default: None
            If None, a static image is plotted.
            If not None, an animation is shown with one frame per bin.
        with_contact_color : bool, default: True
            Plot rates with contact colors
        with_interpolated_map : bool, default: True
            Plot rates with interpolated map
        with_channel_ids : bool, default: False
            Add channel ids text on the probe
        with_color_bar : bool, default: True
            If True, a color bar is added
        color_range : tuple | list | None, default: None
            Sets the color bar range when animating or plotting.
            When None, uses the min-max of the entire time-series via imshow defaults.
            If tuple/list, the length must be 2 representing the range.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, recording, peaks, bin_duration_s=None, with_contact_color=True, with_interpolated_map=True, with_channel_ids=False, with_color_bar=True, color_range=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
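The static/animated distinction above boils down to binning peak counts over time. A minimal numpy sketch with synthetic peaks (the sampling frequency, counts, and variable names are hypothetical, not the widget's internals):

```python
import numpy as np

fs = 30000.0                      # assumed sampling frequency (Hz)
duration_s = 120.0
n_channels = 4
rng = np.random.default_rng(1)

# Hypothetical detected peaks: sample index and channel index per peak
n_peaks = 5000
sample_index = rng.integers(0, int(duration_s * fs), n_peaks)
channel_index = rng.integers(0, n_channels, n_peaks)

# Static map (bin_duration_s=None): one rate per channel over the whole recording
static_rate = np.bincount(channel_index, minlength=n_channels) / duration_s

# Animated map (bin_duration_s=60.): one rate per channel per time bin
bin_duration_s = 60.0
n_bins = int(duration_s // bin_duration_s)
bin_index = (sample_index / fs / bin_duration_s).astype(int)
rates = np.zeros((n_bins, n_channels))
for b in range(n_bins):
    in_bin = bin_index == b
    rates[b] = np.bincount(channel_index[in_bin], minlength=n_channels) / bin_duration_s
```

Each row of `rates` corresponds to one animation frame; `static_rate` is what a single static image would show.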

Class: PeaksOnProbeWidget
  Docstring:
        Generate a plot of spike peaks showing their location on a plot
        of the probe. Color scaling represents spike amplitude.
    
        The generated plot overlays the estimated position of a spike peak
        (as a single point for each peak) onto a plot of the probe. The
        dimensions of the plot are x axis: probe width, y axis: probe depth.
    
        Plots of different sets of peaks can be created on subplots, by
        passing a list of peaks and corresponding peak locations.
    
        Parameters
        ----------
        recording : Recording
            A SpikeInterface recording object.
        peaks : np.array | list[np.ndarray]
            SpikeInterface 'peaks' array created with `detect_peaks()`,
            an array of length num_peaks with entries:
                (sample_index, channel_index, amplitude, segment_index)
            To plot different sets of peaks in subplots, pass a list of peaks, each
            with a corresponding entry in a list passed to `peak_locations`.
        peak_locations : np.array | list[np.ndarray]
            A SpikeInterface 'peak_locations' array created with `localize_peaks()`,
            an array of length num_peaks with entries: (x, y).
            To plot multiple peaks in subplots, pass a list of `peak_locations`
            here with each entry having a corresponding `peaks`.
        segment_index : None | int, default: None
            If set, only peaks from this recording segment will be used.
        time_range : None | Tuple, default: None
            The time period over which to include peaks. If `None`, peaks
            across the entire recording will be shown.
        ylim : None | Tuple, default: None
            The y-axis limits (i.e. the probe depth). If `None`, the entire
            probe will be displayed.
        decimate : int, default: 5
            For performance reasons, every nth peak is shown on the plot,
            where n is set by decimate. To plot all peaks, set `decimate=1`.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, recording, peaks, peak_locations, segment_index=None, time_range=None, ylim=None, decimate=5, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
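The `segment_index`, `time_range`, and `decimate` options above amount to masking and slicing the peaks array. A hedged numpy sketch with a synthetic peaks array (the structured dtype mirrors the documented entries; the sampling frequency and data are made up):

```python
import numpy as np

# Hypothetical structured dtype mirroring the documented 'peaks' entries:
# (sample_index, channel_index, amplitude, segment_index)
peak_dtype = [
    ("sample_index", "int64"),
    ("channel_index", "int64"),
    ("amplitude", "float64"),
    ("segment_index", "int64"),
]

rng = np.random.default_rng(0)
n_peaks, fs = 1000, 30000.0  # fs: assumed sampling frequency in Hz
peaks = np.zeros(n_peaks, dtype=peak_dtype)
peaks["sample_index"] = np.sort(rng.integers(0, int(10 * fs), n_peaks))
peaks["segment_index"] = rng.integers(0, 2, n_peaks)

# Keep only peaks from segment 0 within a 2-8 s window, then show every 5th one
mask = peaks["segment_index"] == 0
times = peaks["sample_index"] / fs
mask &= (times >= 2.0) & (times <= 8.0)
selected = peaks[mask][::5]  # decimate=5, as in the widget default
print(len(selected))
```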

Class: PotentialMergesWidget
  Docstring:
        Plots potential merges
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The input sorting analyzer
        potential_merges : list of lists or tuples
            List of potential merges (see `spikeinterface.curation.get_potential_auto_merges`)
        segment_index : int, default: 0
            The segment index to display
        max_spikes_per_unit : int, default: 100
            The maximum number of spikes to display per unit
        backend: str
        
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', potential_merges: 'list', unit_colors: 'list' = None, segment_index: 'int' = 0, max_spikes_per_unit: 'int' = 100, backend=None, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: ProbeMapWidget
  Docstring:
        Plot the probe of a recording.
    
        Parameters
        ----------
        recording : RecordingExtractor
            The recording extractor object
        color_channels : list or matplotlib color, default: None
            List of colors to be associated with each channel_id. If a single color is given,
            all channels take the specified color
        with_channel_ids : bool, default: False
            Add channel ids text on the probe
        **plot_probe_kwargs : keyword arguments for probeinterface.plotting.plot_probe_group() function
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, recording, color_channels=None, with_channel_ids=False, backend=None, **backend_or_plot_probe_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: QualityMetricsWidget
  Docstring:
        Plots quality metrics distributions.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The object to get quality metrics from
        unit_ids : list or None, default: None
            List of unit ids
        include_metrics : list or None, default: None
            If given, a list of quality metrics to include
        skip_metrics : list or None, default: None
            If given, a list of quality metrics to skip
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, include_metrics=None, skip_metrics=None, unit_colors=None, hide_unit_selector=False, backend=None, **backend_kwargs)
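The `include_metrics` / `skip_metrics` selection can be pictured as a simple column filter over the metric names. A minimal sketch, not the widget's actual implementation (the helper name and metric names are hypothetical):

```python
def select_metrics(metric_names, include_metrics=None, skip_metrics=None):
    """Return the metric columns a distributions plot would keep."""
    if include_metrics is not None:
        metric_names = [m for m in metric_names if m in include_metrics]
    if skip_metrics is not None:
        metric_names = [m for m in metric_names if m not in skip_metrics]
    return metric_names

all_metrics = ["snr", "isi_violations_ratio", "firing_rate", "presence_ratio"]
print(select_metrics(all_metrics, include_metrics=["snr", "firing_rate"]))
# ['snr', 'firing_rate']
print(select_metrics(all_metrics, skip_metrics=["presence_ratio"]))
# ['snr', 'isi_violations_ratio', 'firing_rate']
```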

Class: RasterWidget
  Docstring:
        Plots spike train rasters.
    
        Parameters
        ----------
        sorting : SortingExtractor | None, default: None
            A sorting object
        sorting_analyzer : SortingAnalyzer  | None, default: None
            A sorting analyzer object
        segment_index : None or int, default: None
            The segment index.
        unit_ids : list or None, default: None
            List of unit ids
        time_range : list or None, default: None
            List with start time and end time
        color : matplotlib color, default: "k"
            The color to be used
        backend: str
        
        * matplotlib
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting=None, sorting_analyzer=None, segment_index=None, unit_ids=None, time_range=None, color='k', backend=None, **backend_kwargs)
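Restricting a raster to `time_range` is just a per-unit mask on spike times. A small numpy sketch with hypothetical spike trains (unit names and times are made up):

```python
import numpy as np

# Hypothetical spike trains (seconds) for three units
spike_trains = {
    "u0": np.array([0.1, 0.5, 1.2, 3.4, 7.9]),
    "u1": np.array([0.2, 2.2, 2.3, 6.0]),
    "u2": np.array([4.0, 4.5, 5.1]),
}

time_range = (1.0, 5.0)  # as passed to the widget: [start, end] in seconds
raster_data = {
    unit: st[(st >= time_range[0]) & (st < time_range[1])]
    for unit, st in spike_trains.items()
}
# raster_data now holds, per unit, the spike times that fall in the window;
# a matplotlib `ax.eventplot(list(raster_data.values()))` would draw the raster.
```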

Class: SortingSummaryWidget
  Docstring:
        Plots spike sorting summary.
        This is the main viewer to visualize the final result with several sub-views.
        It uses sortingview (in a web browser) or spikeinterface-gui (with Qt).
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer object
        unit_ids : list or None, default: None
            List of unit ids
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply
            If SortingAnalyzer is already sparse, the argument is ignored
        max_amplitudes_per_unit : int or None, default: None
            Maximum number of spikes per unit for plotting amplitudes.
            If None, all spikes are plotted
        min_similarity_for_correlograms : float, default: 0.2
            Threshold for computing pair-wise cross-correlograms. If template similarity between two units
            is below this threshold, the cross-correlogram is not computed
            (sortingview backend)
        curation : bool, default: False
            If True, manual curation is enabled
            (sortingview backend)
        label_choices : list or None, default: None
            List of labels to be added to the curation table
            (sortingview backend)
        displayed_unit_properties : list or None, default: None
            List of properties to be added to the unit table.
            These may be drawn from the sorting extractor, and, if available,
            the quality_metrics/template_metrics/unit_locations extensions of the SortingAnalyzer.
            See all properties available with sorting.get_property_keys(), and, if available,
            analyzer.get_extension("quality_metrics").get_data().columns and
            analyzer.get_extension("template_metrics").get_data().columns.
        extra_unit_properties : dict or None, default: None
            A dict with extra units properties to display.
            The key is the property name and the value must be a numpy.array.
        curation_dict : dict or None, default: None
            When curation is True, the viewer can optionally load a previous "curation_dict"
            to continue/check previous curations on this analyzer.
            In this case, label_definitions must be None because it is already included in the curation_dict.
            (spikeinterface_gui backend)
        label_definitions : dict or None, default: None
            When curation is True, the user can optionally provide a label_definitions dict.
            This replaces the label_choices in the curation_format.
            (spikeinterface_gui backend)
        backend: str
        
        * sortingview
        * spikeinterface_gui
    
    **backend_kwargs: kwargs
        
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, sparsity=None, max_amplitudes_per_unit=None, min_similarity_for_correlograms=0.2, curation=False, displayed_unit_properties=None, extra_unit_properties=None, label_choices=None, curation_dict=None, label_definitions=None, backend=None, unit_table_properties=None, **backend_kwargs)
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_spikeinterface_gui(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: SpikeLocationsWidget
  Docstring:
        Plots spike locations.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The object to get spike locations from
        unit_ids : list or None, default: None
            List of unit ids
        segment_index : int or None, default: None
            The segment index (or None if mono-segment)
        max_spikes_per_unit : int or None, default: 500
            Number of max spikes per unit to display. Use None for all spikes.
        with_channel_ids : bool, default: False
            Add channel ids text on the probe
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        plot_all_units : bool, default: True
            If True, all units are plotted. The unselected ones (not in unit_ids),
            are plotted in grey (matplotlib backend)
        plot_legend : bool, default: False
            If True, the legend is plotted (matplotlib backend)
        hide_axis : bool, default: False
            If True, the axis is set to off (matplotlib backend)
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, segment_index=None, max_spikes_per_unit=500, with_channel_ids=False, unit_colors=None, hide_unit_selector=False, plot_all_units=True, plot_legend=False, hide_axis=False, backend=None, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
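`max_spikes_per_unit` caps how many spike locations are drawn per unit. A sketch of one plausible subsampling strategy (random choice without replacement; the widget's exact selection may differ, and all data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(42)

def subsample_locations(locations_per_unit, max_spikes_per_unit=500):
    """Randomly keep at most max_spikes_per_unit locations per unit
    (None keeps everything), mirroring the documented behaviour."""
    out = {}
    for unit, locs in locations_per_unit.items():
        if max_spikes_per_unit is not None and len(locs) > max_spikes_per_unit:
            keep = rng.choice(len(locs), size=max_spikes_per_unit, replace=False)
            locs = locs[np.sort(keep)]  # keep temporal order of the retained spikes
        out[unit] = locs
    return out

locations = {"u0": rng.random((1200, 2)), "u1": rng.random((300, 2))}
sub = subsample_locations(locations, max_spikes_per_unit=500)
print(len(sub["u0"]), len(sub["u1"]))  # 500 300
```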

Class: SpikesOnTracesWidget
  Docstring:
        Plots unit spikes/waveforms over traces.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer
        channel_ids : list or None, default: None
            The channel ids to display
        unit_ids : list or None, default: None
            List of unit ids
        order_channel_by_depth : bool, default: False
            If true orders channel by depth
        time_range : list or None, default: None
            List with start time and end time in seconds
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply
            If SortingAnalyzer is already sparse, the argument is ignored
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        mode : "line" | "map" | "auto", default: "auto"
            * "line": classical for low channel count
            * "map": for high channel count use color heat map
            * "auto": auto switch depending on the channel count ("line" if less than 64 channels, "map" otherwise)
        return_scaled : bool, default: False
            If True and the recording has scaled traces, it plots the scaled traces
        cmap : str, default: "RdBu"
            matplotlib colormap used in mode "map"
        show_channel_ids : bool, default: False
            Set yticks with channel ids
        color_groups : bool, default: False
            If True groups are plotted with different colors
        color : str or None, default: None
            The color used to draw the traces
        clim : None, tuple or dict, default: None
            When mode is "map", this argument controls color limits.
            If dict, keys should be the same as recording keys
        scale : float, default: 1
            Scale factor for the traces
        with_colorbar : bool, default: True
            When mode is "map", a colorbar is added
        tile_size : int, default: 512
            For sortingview backend, the size of each tile in the rendered image
        seconds_per_row : float, default: 0.2
            For "map" mode and sortingview backend, seconds to render in each row
        backend: str
        
        * matplotlib
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', segment_index=None, channel_ids=None, unit_ids=None, order_channel_by_depth=False, time_range=None, unit_colors=None, sparsity=None, mode='auto', return_scaled=False, cmap='RdBu', show_channel_ids=False, color_groups=False, color=None, clim=None, tile_size=512, seconds_per_row=0.2, scale=1, spike_width_ms=4, spike_height_um=20, with_colorbar=True, backend=None, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
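The "auto" rule is stated explicitly above (fewer than 64 channels gives "line", otherwise "map"). A one-function sketch of that rule (the helper name is hypothetical):

```python
def resolve_mode(mode, num_channels):
    """Resolve the plotting mode: 'auto' switches to 'line' below
    64 channels and 'map' otherwise, following the documented rule."""
    if mode == "auto":
        return "line" if num_channels < 64 else "map"
    return mode

print(resolve_mode("auto", 32))   # line
print(resolve_mode("auto", 384))  # map
```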

Class: StudyAgreementMatrix
  Docstring:
        Plot agreement matrix.
    
        Parameters
        ----------
        study : GroundTruthStudy
            A study object.
        case_keys : list or None
            A selection of cases to plot, if None, then all.
        ordered : bool, default: True
            Order units by best agreement scores.
            This makes the agreement visible along the diagonal.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, ordered=True, case_keys=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
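The case_keys rule above ("if None, then all") can be sketched as a small pure-Python helper; the helper name is hypothetical, and the commented usage assumes the plot_study_agreement_matrix function named in the StudySummary docstring below is importable from spikeinterface.widgets.

```python
def resolve_case_keys(all_case_keys, case_keys=None):
    """Mirror the documented case_keys semantics: all cases when None,
    otherwise only the requested cases that actually exist in the study."""
    if case_keys is None:
        return list(all_case_keys)
    return [k for k in case_keys if k in all_case_keys]

# Hedged usage (import path is an assumption):
# from spikeinterface.widgets import plot_study_agreement_matrix
# w = plot_study_agreement_matrix(study, ordered=True, case_keys=None)
```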

Class: StudyComparisonCollisionBySimilarityWidget
  Docstring:
        Plots CollisionGTComparison pair by pair, ordered by cosine_similarity, for all
        cases in a study.
    
        Parameters
        ----------
        study : CollisionGTStudy
            The collision study object.
        case_keys : list or None
            A selection of cases to plot, if None, then all.
        metric : "cosine_similarity"
            The metric used to order the pairs
        similarity_bins : array
            If mode is "lines", the bins used to average the pairs
        cmap : str, default: "winter"
            Colormap used to show the averages if mode is "lines"
        good_only : bool, default: False
            If True, keep only the pairs with a non-zero accuracy (found templates)
        min_accuracy : float, default: 0.9
            If good_only is True, the minimum accuracy every cell must have, individually, to be
            considered in a putative pair
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, case_keys=None, metric='cosine_similarity', similarity_bins=array([-4.00000000e-01, -2.00000000e-01, -5.55111512e-17,  2.00000000e-01,
        4.00000000e-01,  6.00000000e-01,  8.00000000e-01,  1.00000000e+00]), show_legend=False, ylim=(0.5, 1), good_only=False, min_accuracy=0.9, cmap='winter', backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
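The default similarity_bins printed in the signature above carry a floating-point artifact (-5.55111512e-17 standing in for 0.0), which is typical of np.arange output. A pure-Python sketch of building the same edges with explicit rounding; the helper name is hypothetical:

```python
def similarity_bin_edges(start=-0.4, stop=1.0, step=0.2):
    """Build evenly spaced similarity bin edges, rounding each edge so no
    accumulated float error (e.g. -5.55e-17 instead of 0.0) leaks through."""
    n = round((stop - start) / step)
    return [round(start + i * step, 10) for i in range(n + 1)]
```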

Class: StudyPerformances
  Docstring:
        Plot performances over case for a study.
    
    
        Parameters
        ----------
        study : GroundTruthStudy
            A study object.
        mode : "ordered" | "snr" | "swarm", default: "ordered"
            Which plot mode to use:
    
            * "ordered": plot performance metrics vs unit indices ordered by decreasing accuracy
            * "snr": plot performance metrics vs snr
            * "swarm": plot performance metrics as a swarm plot (see seaborn.swarmplot for details)
        performance_names : list or tuple, default: ("accuracy", "precision", "recall")
            Which performances to plot ("accuracy", "precision", "recall")
        case_keys : list or None
            A selection of cases to plot, if None, then all.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, mode='ordered', performance_names=('accuracy', 'precision', 'recall'), case_keys=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: StudyRunTimesWidget
  Docstring:
        Plot sorter run times for a SorterStudy.
    
        Parameters
        ----------
        study : SorterStudy
            A study object.
        case_keys : list or None
            A selection of cases to plot, if None, then all.
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, case_keys=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: StudySummary
  Docstring:
        Plot a summary of a ground truth study.
        Internally this plotting function runs:
    
          * plot_study_run_times
          * plot_study_unit_counts
          * plot_study_performances
          * plot_study_agreement_matrix
    
        Parameters
        ----------
        study : GroundTruthStudy
            A study object.
        case_keys : list or None, default: None
            A selection of cases to plot, if None, then all.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, case_keys=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: StudyUnitCountsWidget
  Docstring:
        Plot unit counts for a study: "num_well_detected", "num_false_positive", "num_redundant", "num_overmerged"
    
        Parameters
        ----------
        study : SorterStudy
            A study object.
        case_keys : list or None
            A selection of cases to plot, if None, then all.
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, case_keys=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: TemplateMetricsWidget
  Docstring:
        Plots template metrics distributions.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The object to get quality metrics from
        unit_ids : list or None, default: None
            List of unit ids
        include_metrics : list or None, default: None
            If given list of quality metrics to include
        skip_metrics : list or None, default: None
            If given, a list of quality metrics to skip
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, include_metrics=None, skip_metrics=None, unit_colors=None, hide_unit_selector=False, backend=None, **backend_kwargs)
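The include_metrics/skip_metrics rules described above can be sketched as a filter. This is a hypothetical helper mirroring the documented semantics, not the widget's actual internals:

```python
def filter_metric_names(all_metrics, include_metrics=None, skip_metrics=None):
    """Keep only include_metrics when given, then drop anything in skip_metrics,
    following the documented TemplateMetricsWidget parameter semantics."""
    if include_metrics is None:
        metrics = list(all_metrics)
    else:
        metrics = [m for m in all_metrics if m in include_metrics]
    if skip_metrics is not None:
        metrics = [m for m in metrics if m not in skip_metrics]
    return metrics
```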

Class: TemplateSimilarityWidget
  Docstring:
        Plots unit template similarity.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The object to get template similarity from
        unit_ids : list or None, default: None
            List of unit ids
        display_diagonal_values : bool, default: False
            If False, the diagonal is displayed as zeros.
            If True, the similarity values (all 1s) are displayed
        cmap : matplotlib colormap, default: "viridis"
            The matplotlib colormap
        show_unit_ticks : bool, default: False
            If True, ticks display unit ids
        show_colorbar : bool, default: True
            If True, color bar is displayed
        backend: str
        
        * matplotlib
        * sortingview
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, cmap='viridis', display_diagonal_values=False, show_unit_ticks=False, show_colorbar=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
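The display_diagonal_values behavior (diagonal shown as zeros unless the trivial all-ones self-similarity is requested) can be sketched on a plain nested-list matrix; this is a hypothetical helper, not the widget's internal code:

```python
def prepare_similarity_for_display(similarity, display_diagonal_values=False):
    """Return a copy of a square similarity matrix, zeroing the diagonal
    unless display_diagonal_values is True (mirrors the documented option)."""
    out = [row[:] for row in similarity]  # copy rows so the input is untouched
    if not display_diagonal_values:
        for i in range(len(out)):
            out[i][i] = 0.0
    return out
```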

Class: TracesWidget
  Docstring:
        Plots recording timeseries.
    
        Parameters
        ----------
        recording : RecordingExtractor, dict, or list
            The recording extractor object. If dict (or list) then it is a multi-layer display to compare, for example,
            different processing steps
        segment_index : None or int, default: None
            The segment index (required for multi-segment recordings)
        channel_ids : list or None, default: None
            The channel ids to display
        order_channel_by_depth : bool, default: False
            Reorder channel by depth
        time_range : list, tuple or None, default: None
            List with start time and end time
        mode : "line" | "map" | "auto", default: "auto"
            Three possible modes
            * "line": classical for low channel count
            * "map": for high channel count use color heat map
            * "auto": auto switch depending on the channel count ("line" if less than 64 channels, "map" otherwise)
        return_scaled : bool, default: False
            If True and the recording has scaled traces, it plots the scaled traces
        events : np.ndarray | list[np.ndarray] | None, default: None
            Events to display as vertical lines.
            The numpy arrays can either be of dtype float, with event times in seconds,
            or a structured array with a "time" field
            and optional "duration" and "label" fields.
            For multi-segment recordings, provide a list of numpy array events, one for each segment.
        cmap : matplotlib colormap, default: "RdBu_r"
            matplotlib colormap used in mode "map"
        show_channel_ids : bool, default: False
            Set yticks with channel ids
        color_groups : bool, default: False
            If True groups are plotted with different colors
        color : str or None, default: None
            The color used to draw the traces
        clim : None, tuple or dict, default: None
            When mode is "map", this argument controls color limits.
            If dict, keys should be the same as recording keys
        scale : float, default: 1
            Scale factor for the traces
        vspacing_factor : float, default: 1.5
            Vertical spacing between channels as a multiple of maximum channel amplitude
        with_colorbar : bool, default: True
            When mode is "map", a colorbar is added
        tile_size : int, default: 1500
            For sortingview backend, the size of each tile in the rendered image
        seconds_per_row : float, default: 0.2
            For "map" mode and sortingview backend, seconds to render in each row
        add_legend : bool, default: True
            If True adds legend to figures
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
        * ephyviewer
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, recording, segment_index=None, channel_ids=None, order_channel_by_depth=False, time_range=None, mode='auto', return_scaled=False, cmap='RdBu_r', show_channel_ids=False, events=None, events_color='gray', events_alpha=0.5, color_groups=False, color=None, clim=None, tile_size=1500, seconds_per_row=0.2, scale=1, vspacing_factor=1.5, with_colorbar=True, add_legend=True, backend=None, **backend_kwargs)
  Method: plot_ephyviewer(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
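The "auto" mode rule quoted above ("line" if less than 64 channels, "map" otherwise) can be sketched directly; the helper name is hypothetical:

```python
def resolve_traces_mode(mode, num_channels):
    """Resolve the TracesWidget mode: "auto" switches on the documented
    64-channel threshold, explicit modes pass through unchanged."""
    if mode == "auto":
        return "line" if num_channels < 64 else "map"
    if mode in ("line", "map"):
        return mode
    raise ValueError(f"Unknown mode: {mode!r}")
```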

Class: UnitDepthsWidget
  Docstring:
        Plot unit depths
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer object
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        depth_axis : int, default: 1
            The dimension of unit_locations that is depth
        peak_sign : "neg" | "pos" | "both", default: "neg"
            Sign of peak for amplitudes
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer, unit_colors=None, depth_axis=1, peak_sign='neg', backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: UnitLocationsWidget
  Docstring:
        Plots each unit's location.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer, which must contain the "unit_locations" extension
        unit_ids : list or None, default: None
            List of unit ids
        with_channel_ids : bool, default: False
            Add channel ids text on the probe
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        hide_unit_selector : bool, default: False
            If True, the unit selector is not displayed (sortingview backend)
        plot_all_units : bool, default: True
            If True, all units are plotted. The unselected ones (not in unit_ids)
            are plotted in grey (matplotlib backend)
        plot_legend : bool, default: False
            If True, the legend is plotted (matplotlib backend)
        hide_axis : bool, default: False
            If True, the axis is set to off (matplotlib backend)
        margin : float, default: 50
            Amount of margin to add to plot, beyond the extremum unit locations.
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids: 'list | None' = None, with_channel_ids: 'bool' = False, unit_colors: 'dict | None' = None, hide_unit_selector: 'bool' = False, plot_all_units: 'bool' = True, plot_legend: 'bool' = False, hide_axis: 'bool' = False, backend: 'str | None' = None, margin: 'float' = 50, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: UnitPresenceWidget
  Docstring:
        Estimates the probability density function for each unit using Gaussian kernels.
    
        Parameters
        ----------
        sorting : SortingExtractor
            The sorting extractor object
        segment_index : None or int
            The segment index.
        time_range : list or None, default: None
            List with start time and end time
        bin_duration_s : float, default: 0.05
            Bin size (in seconds) for the heat map time axis
        smooth_sigma : float, default: 4.5
            Sigma for the Gaussian kernel (in number of bins)
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting, segment_index=None, time_range=None, bin_duration_s=0.05, smooth_sigma=4.5, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
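The heat-map time axis follows from the time range and bin_duration_s, with smooth_sigma expressed in number of bins. A sketch of the derived quantities; the helper name and the exact bin-count formula are assumptions, not the widget's verified internals:

```python
import math

def presence_histogram_params(t_start, t_stop, bin_duration_s=0.05, smooth_sigma=4.5):
    """Derive the number of time bins for the heat map and the Gaussian
    smoothing sigma converted from bins to seconds, per the documented
    parameter meanings."""
    n_bins = math.ceil((t_stop - t_start) / bin_duration_s)
    sigma_s = smooth_sigma * bin_duration_s  # sigma is given in bins
    return n_bins, sigma_s
```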

Class: UnitProbeMapWidget
  Docstring:
        Plots the unit map. Amplitude is color-coded on the probe contacts.
    
        Can be static (animated=False) or animated (animated=True)
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
        unit_ids : list
            List of unit ids.
        channel_ids : list
            The channel ids to display
        animated : bool, default: False
            If True, animate the amplitude over time
        with_channel_ids : bool, default: False
            Add channel ids text on the probe
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they is created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer, unit_ids=None, channel_ids=None, animated=None, with_channel_ids=False, colorbar=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: UnitSummaryWidget
  Docstring:
        Plot a unit summary.
    
        If amplitudes are already computed, they are displayed.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer object
        unit_id : int or str
            The unit id to plot the summary of
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply.
            If SortingAnalyzer is already sparse, the argument is ignored
        subwidget_kwargs : dict or None, default: None
            Parameters for the subwidgets in a nested dictionary
    
                * unit_locations : UnitLocationsWidget (see UnitLocationsWidget for details)
                * unit_waveforms : UnitWaveformsWidget (see UnitWaveformsWidget for details)
                * unit_waveform_density_map : UnitWaveformDensityMapWidget (see UnitWaveformDensityMapWidget for details)
                * autocorrelograms : AutoCorrelogramsWidget (see AutoCorrelogramsWidget for details)
                * amplitudes : AmplitudesWidget (see AmplitudesWidget for details)
    
            Please note that the unit_colors should not be set in subwidget_kwargs, but directly as a parameter of plot_unit_summary.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they is created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer, unit_id, unit_colors=None, sparsity=None, subwidget_kwargs=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
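A subwidget_kwargs dict uses the nested keys listed above. The inner kwargs below are illustrative assumptions drawn from the corresponding widget docstrings (plot_legend from UnitLocationsWidget, scale from UnitTemplatesWidget), not an exhaustive or verified set; as noted, unit_colors goes to plot_unit_summary directly, never in here:

```python
# Illustrative subwidget_kwargs; top-level keys follow the documented list,
# inner kwargs are assumptions taken from the individual widget docstrings.
subwidget_kwargs = {
    "unit_locations": {"plot_legend": False},  # see UnitLocationsWidget
    "unit_waveforms": {"scale": 1.0},          # see UnitWaveformsWidget
    "autocorrelograms": {},                    # defaults
}
# Hedged usage (import path is an assumption):
# from spikeinterface.widgets import plot_unit_summary
# w = plot_unit_summary(sorting_analyzer, unit_id, subwidget_kwargs=subwidget_kwargs)
```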

Class: UnitTemplatesWidget
  Docstring:
        Plots unit waveforms.
    
        Parameters
        ----------
        sorting_analyzer_or_templates : SortingAnalyzer | Templates
            The SortingAnalyzer or Templates object.
            If Templates is given, the "plot_waveforms" argument is set to False
        channel_ids : list or None, default: None
            The channel ids to display
        unit_ids : list or None, default: None
            List of unit ids
        plot_templates : bool, default: True
            If True, templates are plotted over the waveforms
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply
            If SortingAnalyzer is already sparse, the argument is ignored
        set_title : bool, default: True
            Create a plot title with the unit number if True
        plot_channels : bool, default: False
            Plot channel locations below traces
        unit_selected_waveforms : None or dict, default: None
            A dict whose keys are unit ids and whose values are the subsets of waveform indices to
            be displayed (matplotlib backend)
        max_spikes_per_unit : int or None, default: 50
            If given and unit_selected_waveforms is None, only max_spikes_per_unit random waveforms are
            displayed per unit (matplotlib backend)
        scale : float, default: 1
            Scale factor for the waveforms/templates (matplotlib backend)
        widen_narrow_scale : float, default: 1
            Scale factor for the x-axis of the waveforms/templates (matplotlib backend)
        axis_equal : bool, default: False
            Equal aspect ratio for x and y axis, to visualize the array geometry to scale
        lw_waveforms : float, default: 1
            Line width for the waveforms, (matplotlib backend)
        lw_templates : float, default: 2
            Line width for the templates, (matplotlib backend)
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        alpha_waveforms : float, default: 0.5
            Alpha value for waveforms (matplotlib backend)
        alpha_templates : float, default: 1
            Alpha value for templates, (matplotlib backend)
        shade_templates : bool, default: True
            If True, templates are shaded, see templates_percentile_shading argument
        templates_percentile_shading : float, tuple/list of floats, or None, default: (1, 25, 75, 99)
            It controls the shading of the templates.
            If None, the shading is +/- the standard deviation of the templates.
            If float, it controls the percentile of the template values used to shade the templates.
            Note that it is one-sided : if 5 is given, the 5th and 95th percentiles are used to shade
            the templates. If list of floats, it needs to have an even number of elements which control
            the lower and upper percentile used to shade the templates. The first half of the elements
            are used for the lower bounds, and the second half for the upper bounds.
            Inner elements produce darker shadings. For sortingview backend only 2 or 4 elements are
            supported.
        scalebar : bool, default: False
            Display a scale bar on the waveforms plot (matplotlib backend)
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        same_axis : bool, default: False
            If True, waveforms and templates are displayed on the same axis (matplotlib backend)
        x_offset_units : bool, default: False
            In case same_axis is True, this parameter allows x-offsetting the waveforms of
            different units (recommended for a few units) (matplotlib backend)
        plot_legend : bool, default: True
            Display legend (matplotlib backend)
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, *args, **kargs)
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
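The `templates_percentile_shading` rule above (first half of the values gives the lower shading bounds, second half the upper ones) can be sketched with plain numpy. This is an illustration on hypothetical random waveforms, not the library's actual shading code:

```python
import numpy as np

# Hypothetical stack of waveforms for one unit on one channel:
# 200 spikes x 60 samples (values are arbitrary).
rng = np.random.default_rng(0)
waveforms = rng.normal(size=(200, 60))

# templates_percentile_shading=(1, 25, 75, 99): the first half of the
# elements are the lower-bound percentiles, the second half the upper ones.
shading = (1, 25, 75, 99)
half = len(shading) // 2
lower = np.percentile(waveforms, shading[:half], axis=0)  # shape (2, 60)
upper = np.percentile(waveforms, shading[half:], axis=0)  # shape (2, 60)
# Inner elements (25/75) produce the darker, narrower shading band.
```

Each row of `lower`/`upper` is one shading boundary curve over the template samples.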

Class: UnitWaveformDensityMapWidget
  Docstring:
        Plots unit waveforms using heat map density.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer for calculating waveforms
        channel_ids : list or None, default: None
            The channel ids to display
        unit_ids : list or None, default: None
            List of unit ids
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply
            If SortingAnalyzer is already sparse, the argument is ignored
        use_max_channel : bool, default: False
            Use only the max channel
        peak_sign : "neg" | "pos" | "both", default: "neg"
            Used to detect max channel only when use_max_channel=True
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        same_axis : bool, default: False
            If True, all densities are plotted on the same axis, and the displayed
            channels are the union of the channels across all units
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer, channel_ids=None, unit_ids=None, sparsity=None, same_axis=False, use_max_channel=False, peak_sign='neg', unit_colors=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
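The "heat map density" this widget renders amounts to histogramming, for each sample index, the amplitude values across all waveforms. A numpy-only sketch on hypothetical data (the widget's internal computation may differ):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical waveforms for one unit on one channel: 500 spikes x 80 samples.
waveforms = rng.normal(size=(500, 80))

# For each sample index, histogram the amplitude values; the resulting
# (amplitude-bin x sample) matrix is what a density map displays.
n_bins = 50
amp_edges = np.linspace(waveforms.min(), waveforms.max(), n_bins + 1)
density = np.zeros((n_bins, waveforms.shape[1]))
for s in range(waveforms.shape[1]):
    density[:, s], _ = np.histogram(waveforms[:, s], bins=amp_edges)
```

Each column of `density` sums to the number of spikes, since the bin edges span the full amplitude range.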

Class: UnitWaveformsWidget
  Docstring:
        Plots unit waveforms.
    
        Parameters
        ----------
        sorting_analyzer_or_templates : SortingAnalyzer | Templates
            The SortingAnalyzer or Templates object.
            If Templates is given, the "plot_waveforms" argument is set to False
        channel_ids : list or None, default: None
            The channel ids to display
        unit_ids : list or None, default: None
            List of unit ids
        plot_templates : bool, default: True
            If True, templates are plotted over the waveforms
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply
            If SortingAnalyzer is already sparse, the argument is ignored
        set_title : bool, default: True
            Create a plot title with the unit number if True
        plot_channels : bool, default: False
            Plot channel locations below traces
        unit_selected_waveforms : None or dict, default: None
            A dict whose keys are unit ids and whose values are the subsets of waveform
            indices that should be displayed (matplotlib backend)
        max_spikes_per_unit : int or None, default: 50
            If given and unit_selected_waveforms is None, only max_spikes_per_unit random
            waveforms are displayed per unit (matplotlib backend)
        scale : float, default: 1
            Scale factor for the waveforms/templates (matplotlib backend)
        widen_narrow_scale : float, default: 1
            Scale factor for the x-axis of the waveforms/templates (matplotlib backend)
        axis_equal : bool, default: False
            Equal aspect ratio for x and y axis, to visualize the array geometry to scale
        lw_waveforms : float, default: 1
            Line width for the waveforms, (matplotlib backend)
        lw_templates : float, default: 2
            Line width for the templates, (matplotlib backend)
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        alpha_waveforms : float, default: 0.5
            Alpha value for waveforms (matplotlib backend)
        alpha_templates : float, default: 1
            Alpha value for templates, (matplotlib backend)
        shade_templates : bool, default: True
            If True, templates are shaded, see templates_percentile_shading argument
        templates_percentile_shading : float, tuple/list of floats, or None, default: (1, 25, 75, 99)
            It controls the shading of the templates.
            If None, the shading is +/- the standard deviation of the templates.
            If float, it controls the percentile of the template values used to shade the templates.
            Note that it is one-sided : if 5 is given, the 5th and 95th percentiles are used to shade
            the templates. If list of floats, it needs to have an even number of elements which control
            the lower and upper percentile used to shade the templates. The first half of the elements
            are used for the lower bounds, and the second half for the upper bounds.
            Inner elements produce darker shadings. For sortingview backend only 2 or 4 elements are
            supported.
        scalebar : bool, default: False
            Display a scale bar on the waveforms plot (matplotlib backend)
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        same_axis : bool, default: False
            If True, waveforms and templates are displayed on the same axis (matplotlib backend)
        x_offset_units : bool, default: False
            In case same_axis is True, this parameter allows x-offsetting the waveforms of
            different units (recommended for a few units) (matplotlib backend)
        plot_legend : bool, default: True
            Display legend (matplotlib backend)
        backend: str
        
        * matplotlib
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer_or_templates: 'SortingAnalyzer | Templates', channel_ids=None, unit_ids=None, plot_waveforms=True, plot_templates=True, plot_channels=False, unit_colors=None, sparsity=None, ncols=5, scale=1, widen_narrow_scale=1, lw_waveforms=1, lw_templates=2, axis_equal=False, unit_selected_waveforms=None, max_spikes_per_unit=50, set_title=True, same_axis=False, shade_templates=True, templates_percentile_shading=(1, 25, 75, 99), scalebar=False, x_offset_units=False, alpha_waveforms=0.5, alpha_templates=1, hide_unit_selector=False, plot_legend=True, backend=None, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
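The `unit_selected_waveforms` dict (unit id to waveform indices) can be built by hand, for example to reproduce what `max_spikes_per_unit` does. This is a numpy sketch with hypothetical unit ids and spike counts, not the library's own selection code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical spike counts per unit (unit ids are illustrative).
n_spikes = {"unit0": 300, "unit1": 120, "unit2": 30}
max_spikes_per_unit = 50

# unit_id -> indices of the waveforms to draw, at most max_spikes_per_unit
# random picks per unit, without replacement.
unit_selected_waveforms = {
    uid: rng.choice(n, size=min(n, max_spikes_per_unit), replace=False)
    for uid, n in n_spikes.items()
}
```

Passing such a dict to the widget overrides the random subsampling and pins the displayed waveforms to a reproducible selection.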

Function: array_to_image(data, colormap='RdGy', clim=None, spatial_zoom=(0.75, 1.25), num_timepoints_per_row=30000, row_spacing=0.25, scalebar=False, sampling_frequency=None)
  Docstring:
    Converts a 2D numpy array (width x height) to a
    3D image array (width x height x RGB color).
    
    Useful for visualizing data before/after preprocessing
    
    Parameters
    ----------
    data : np.array
        2D numpy array
    colormap : str
        Identifier for a Matplotlib colormap
    clim : tuple or None
        The color limits. If None, the clim is the range of the traces
    spatial_zoom : tuple
        Tuple specifying width & height scaling
    num_timepoints_per_row : int
        Max number of samples before wrapping
    row_spacing : float
        Ratio of row spacing to overall channel height
    scalebar : bool
        If True, a scale bar is drawn on the image
    sampling_frequency : float or None
        The sampling frequency, required when scalebar is True
    
    Returns
    -------
    output_image : 3D numpy array
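The core of this transformation (normalize to the color limits, wrap long recordings into rows, expand to RGB) can be sketched with numpy alone. The sketch substitutes a grayscale ramp for the Matplotlib colormap and skips the spatial zoom, so it only illustrates the shape logic:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical traces: 2000 timepoints x 16 channels.
data = rng.normal(size=(2000, 16))

# Normalize to [0, 1] using clim (here the data range, as when clim=None).
clim = (data.min(), data.max())
norm = (data - clim[0]) / (clim[1] - clim[0])

# Wrap the time axis into rows of num_timepoints_per_row samples;
# the channel blocks of successive rows stack vertically.
num_timepoints_per_row = 500
n_rows = int(np.ceil(data.shape[0] / num_timepoints_per_row))
rows = [norm[i * num_timepoints_per_row:(i + 1) * num_timepoints_per_row]
        for i in range(n_rows)]
stacked = np.concatenate(rows, axis=1)

# Grayscale stand-in for the colormap: replicate into 3 RGB channels.
output_image = np.repeat(stacked[:, :, np.newaxis], 3, axis=2)
```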

Function: get_default_plotter_backend()
  Docstring:
    Return the default backend for spikeinterface widgets.
    The default backend is "matplotlib" at init.
    It can be globally set with `set_default_plotter_backend(backend)`

Function: get_some_colors(keys, color_engine='auto', map_name='gist_ncar', format='RGBA', shuffle=None, seed=None, margin=None, resample=True)
  Docstring:
    Return a dict of colors for given keys
    
    Parameters
    ----------
    keys : list
        The keys for which colors are generated
    color_engine : "auto" | "matplotlib" | "colorsys" | "distinctipy", default: "auto"
        The engine to generate colors
    map_name : str
        Used for matplotlib
    format: str, default: "RGBA"
        The output format
    shuffle : bool or None, default: None
        Whether to shuffle the colors.
        If None then:
        * set to True for matplotlib and colorsys
        * set to False for distinctipy
    seed: int or None, default: None
        Set the seed
    margin : None or int, default: None
        If None, a default margin is applied to avoid colors at the borders of some
        matplotlib colormaps.
    resample : bool, default: True
        For matplotlib, resample the colormap to the number of keys (plus the margin, if any)
    
    Returns
    -------
    dict_colors: dict
        A dict of colors for given keys.
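The "colorsys" engine mentioned above maps keys to evenly spaced hues. A minimal stdlib-only sketch of that idea (the helper name and the saturation/value constants are illustrative, not the library's):

```python
import colorsys

def some_colors(keys):
    """Sketch of a colorsys-style engine: evenly spaced HSV hues -> RGBA."""
    n = len(keys)
    colors = {}
    for i, key in enumerate(keys):
        r, g, b = colorsys.hsv_to_rgb(i / n, 0.75, 0.9)
        colors[key] = (r, g, b, 1.0)  # RGBA with full alpha
    return colors

unit_colors = some_colors(["u0", "u1", "u2", "u3"])
```

The resulting dict has the same shape as what widgets expect for `unit_colors`: unit ids as keys, RGBA tuples as values.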

Function: get_unit_colors(sorting_or_analyzer_or_templates, color_engine='auto', map_name='gist_ncar', format='RGBA', shuffle=None, seed=None)
  Docstring:
    Return a dict of colors per unit.

Class: plot_agreement_matrix
  Docstring:
        Plots sorting comparison agreement matrix.
    
        Parameters
        ----------
        sorting_comparison : GroundTruthComparison or SymmetricSortingComparison
            The sorting comparison object.
            Can optionally be symmetric if given a SymmetricSortingComparison
        ordered : bool, default: True
            Order units by best agreement scores.
            If True, agreement scores can be seen along the diagonal
        count_text : bool, default: True
            If True, counts are displayed as text
        unit_ticks : bool, default: True
            If True, unit tick labels are displayed
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_comparison, ordered=True, count_text=True, unit_ticks=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
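The effect of `ordered=True` (best agreement scores along the diagonal) amounts to permuting the tested-unit columns of the agreement matrix. A greedy per-row argmax sketch on a hypothetical score matrix; the library's actual ordering logic may use an optimal assignment instead:

```python
import numpy as np

# Hypothetical agreement scores: 3 GT units (rows) x 3 tested units (cols).
scores = np.array([
    [0.1, 0.9, 0.0],
    [0.8, 0.0, 0.1],
    [0.0, 0.2, 0.7],
])

# Permute tested units so each row's best match lands on the diagonal.
col_order = np.argmax(scores, axis=1)
ordered = scores[:, col_order]
```

After the permutation, `np.diag(ordered)` holds each ground-truth unit's best agreement score, which is what makes the diagonal visually prominent in the plot.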

Class: plot_all_amplitudes_distributions
  Docstring:
        Plots distributions of amplitudes as violin plot for all or some units.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer
        unit_ids : list or None, default: None
            List of unit ids
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, unit_colors=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_amplitudes
  Docstring:
        Plots spike amplitudes
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The input SortingAnalyzer
        unit_ids : list or None, default: None
            List of unit ids
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        segment_index : int or None, default: None
            The segment index (or None if mono-segment)
        max_spikes_per_unit : int or None, default: None
            Number of max spikes per unit to display. Use None for all spikes
        y_lim : tuple or None, default: None
            The min and max amplitude to display. If None, the min and max of the amplitudes are used.
        scatter_decimate : int, default: 1
            If equal to n, each nth spike is kept for plotting.
        hide_unit_selector : bool, default: False
            If True the unit selector is not displayed
            (sortingview backend)
        plot_histograms : bool, default: False
            If True, a histogram of the amplitudes is plotted on the right axis
            (matplotlib backend)
        bins : int or None, default: None
            If plot_histograms is True, the number of bins for the amplitude histogram.
            If None, uses 100 bins.
        plot_legend : bool, default: True
            If True, the legend is included in the plot
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, unit_colors=None, segment_index=None, max_spikes_per_unit=None, y_lim=None, scatter_decimate=1, hide_unit_selector=False, plot_histograms=False, bins=None, plot_legend=True, backend=None, **backend_kwargs)
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
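Two of the parameters above have simple numeric semantics that are worth making concrete: `scatter_decimate=n` keeps every nth spike for the scatter, and `bins=None` falls back to 100 histogram bins. A numpy sketch on hypothetical amplitude data:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical spike amplitudes (e.g. in uV) for one unit.
amplitudes = rng.normal(loc=-80.0, scale=10.0, size=10_000)

# scatter_decimate=10: keep every 10th spike for plotting.
scatter_decimate = 10
kept = amplitudes[::scatter_decimate]

# plot_histograms=True with bins=None uses 100 bins.
bins = None
counts, edges = np.histogram(amplitudes, bins=bins if bins is not None else 100)
```

Decimation only thins the scatter display; the histogram is still computed over all spikes.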

Class: plot_autocorrelograms
  Docstring:
        Plots unit auto correlograms.
    
        Parameters
        ----------
        sorting_analyzer_or_sorting : SortingAnalyzer or BaseSorting
            The object to compute/get crosscorrelograms from
        unit_ids : list or None, default: None
            List of unit ids
        min_similarity_for_correlograms : float, default: 0.2
            For sortingview backend. Threshold for computing pair-wise cross-correlograms.
            If template similarity between two units is below this threshold, the cross-correlogram is not displayed.
            For auto-correlograms plot, this is automatically set to None.
        window_ms : float, default: 100.0
            Window for CCGs in ms. If correlograms are already computed (e.g. with SortingAnalyzer),
            this argument is ignored
        bin_ms : float, default: 1.0
            Bin size in ms. If correlograms are already computed (e.g. with SortingAnalyzer),
            this argument is ignored
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        backend: str
        
        * matplotlib
        * sortingview
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, *args, **kargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
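The `window_ms` / `bin_ms` semantics shared by the correlogram widgets can be made concrete with a brute-force numpy correlogram on hypothetical spike trains. This is an illustrative sketch, not spikeinterface's optimized implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 30_000.0  # hypothetical sampling frequency
# Two hypothetical spike trains (sample indices) over 10 s of data.
t1 = np.sort(rng.integers(0, int(10 * fs), size=500))
t2 = np.sort(rng.integers(0, int(10 * fs), size=400))

# window_ms=100 means lags in [-50, +50] ms; bin_ms=1 gives 100 bins.
window_ms, bin_ms = 100.0, 1.0
half_window = int(window_ms / 2 * fs / 1000)  # +/- 50 ms in samples
bin_size = int(bin_ms * fs / 1000)            # 1 ms in samples

# All pairwise spike-time differences within the window.
diffs = (t2[np.newaxis, :] - t1[:, np.newaxis]).ravel()
diffs = diffs[np.abs(diffs) <= half_window]

n_bins = int(window_ms / bin_ms)
edges = np.arange(-half_window, half_window + bin_size, bin_size)
ccg, _ = np.histogram(diffs, bins=edges)
```

Setting `t2 = t1` (minus the zero-lag self-pairs) would give the auto-correlogram instead of the cross-correlogram.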

Class: plot_comparison_collision_by_similarity
  Docstring:
        Plots a CollisionGTComparison pair by pair, ordered by cosine similarity
    
        Parameters
        ----------
        comp : CollisionGTComparison
            The collision ground truth comparison object
        templates : array
            Templates of the units
        mode : "heatmap" | "lines", default: "heatmap"
            Show collision curves for every pair ("heatmap") or as lines averaged over pairs ("lines")
        similarity_bins : array
            If mode is "lines", the bins used to average the pairs
        cmap : str
            Colormap used to show the averages if mode is "lines"
        metric : "cosine_similarity"
            Metric used for ordering
        good_only : bool, default: False
            If True, keep only the pairs with a non-zero accuracy (found templates)
        min_accuracy : float, default: 0.9
            If good_only is True, the minimum accuracy every unit should have, individually, to be
            considered in a putative pair
        unit_ids : list
            List of considered units
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, comp, templates, unit_ids=None, metric='cosine_similarity', figure=None, ax=None, mode='heatmap', similarity_bins=np.arange(-0.4, 1.2, 0.2), cmap='winter', good_only=False, min_accuracy=0.9, show_legend=False, ylim=(0, 1), backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
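The "lines" mode described above averages per-pair collision curves within each similarity bin. A numpy sketch of that binning on hypothetical curves and similarities (the widget's internal averaging may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(5)
n_pairs, n_lags = 40, 11
# Hypothetical per-pair collision recovery curves and template similarities.
curves = rng.random(size=(n_pairs, n_lags))
similarities = rng.uniform(-0.4, 1.0, size=n_pairs)

# mode="lines": average the curves of all pairs falling in each similarity bin.
similarity_bins = np.arange(-0.4, 1.2, 0.2)  # 8 edges -> 7 bins
which = np.digitize(similarities, similarity_bins) - 1
mean_curves = np.array([
    curves[which == b].mean(axis=0) if np.any(which == b)
    else np.full(n_lags, np.nan)  # empty bin -> no line drawn
    for b in range(len(similarity_bins) - 1)
])
```

Each row of `mean_curves` becomes one line in the plot, colored by the `cmap` according to its similarity bin.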

Class: plot_confusion_matrix
  Docstring:
        Plots sorting comparison confusion matrix.
    
        Parameters
        ----------
        gt_comparison : GroundTruthComparison
            The ground truth sorting comparison object
        count_text : bool, default: True
            If True, counts are displayed as text
        unit_ticks : bool, default: True
            If True, unit tick labels are displayed
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, gt_comparison, count_text=True, unit_ticks=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
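A confusion matrix like the one this widget plots can be accumulated from matched spike labels with plain numpy. The data below is hypothetical (unit ids and the -1 "missed" convention are illustrative, not the comparison object's actual internals):

```python
import numpy as np

# Hypothetical matched labels: for each GT spike, the tested unit it was
# assigned to (-1 = missed spike).
gt_labels = np.array([0, 0, 1, 1, 1, 2, 2, 0, 1, 2])
tested_labels = np.array([0, 0, 1, 2, 1, 2, -1, 0, 1, 2])

n_gt, n_tested = 3, 3
# One extra column collects the missed spikes.
conf = np.zeros((n_gt, n_tested + 1), dtype=int)
for g, t in zip(gt_labels, tested_labels):
    col = n_tested if t == -1 else t
    conf[g, col] += 1
```

Diagonal entries count correctly assigned spikes; off-diagonal entries reveal confusions between units, and the last column counts false negatives per ground-truth unit.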

Class: plot_crosscorrelograms
  Docstring:
        Plots unit cross correlograms.
    
        Parameters
        ----------
        sorting_analyzer_or_sorting : SortingAnalyzer or BaseSorting
            The object to compute/get crosscorrelograms from
        unit_ids : list or None, default: None
            List of unit ids
        min_similarity_for_correlograms : float, default: 0.2
            For sortingview backend. Threshold for computing pair-wise cross-correlograms.
            If template similarity between two units is below this threshold, the cross-correlogram is not displayed.
            For auto-correlograms plot, this is automatically set to None.
        window_ms : float, default: 100.0
            Window for CCGs in ms. If correlograms are already computed (e.g. with SortingAnalyzer),
            this argument is ignored
        bin_ms : float, default: 1.0
            Bin size in ms. If correlograms are already computed (e.g. with SortingAnalyzer),
            this argument is ignored
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        backend: str
        
        * matplotlib
        * sortingview
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer_or_sorting: 'Union[SortingAnalyzer, BaseSorting]', unit_ids=None, min_similarity_for_correlograms=0.2, window_ms=100.0, bin_ms=1.0, hide_unit_selector=False, unit_colors=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_drift_raster_map
  Docstring:
        Plot the drift raster map from peaks or a SortingAnalyzer.
        The drift raster map is a scatter plot of the estimated peak depth vs time and it is
        useful to visualize the drift over the course of the recording.
    
        Parameters
        ----------
        peaks : np.array | None, default: None
            The peaks array, with dtype ("sample_index", "channel_index", "amplitude", "segment_index"),
            as returned by the `detect_peaks` or `correct_motion` functions.
        peak_locations : np.array | None, default: None
            The peak locations, with dtype ("x", "y") or ("x", "y", "z"), as returned by the
            `localize_peaks` or `correct_motion` functions.
        sorting_analyzer : SortingAnalyzer | None, default: None
            The sorting analyzer object. To use this function, the `SortingAnalyzer` must have the
            "spike_locations" extension computed.
        direction : "x" or "y", default: "y"
            The direction to display. "y" is the depth direction.
        segment_index : int | None, default: None
            The segment index to display.
        recording : RecordingExtractor | None, default: None
            The recording extractor object (only used to get "real" times).
        sampling_frequency : float | None, default: None
            The sampling frequency (needed if recording is None).
        sampling_frequency : float, default: None
            The sampling frequency (needed if recording is None).
        depth_lim : tuple or None, default: None
            The min and max depth to display. If None, the min and max of the recording are used.
        scatter_decimate : int or None, default: None
            If set to n, every nth spike is kept for plotting.
        color_amplitude : bool, default: True
            If True, the color of the scatter points is the amplitude of the peaks.
        cmap : str, default: "inferno"
            The colormap to use for the amplitude.
        color : str, default: "Gray"
            The color of the scatter points if color_amplitude is False.
        clim : tuple or None, default: None
            The min and max amplitude to display. If None, the min and max of the amplitudes are used.
        alpha : float, default: 1
            The alpha of the scatter points.
        backend: str
        
        * matplotlib
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, peaks: 'np.array | None' = None, peak_locations: 'np.array | None' = None, sorting_analyzer: 'SortingAnalyzer | None' = None, direction: 'str' = 'y', recording: 'BaseRecording | None' = None, sampling_frequency: 'float | None' = None, segment_index: 'int | None' = None, depth_lim: 'tuple[float, float] | None' = None, color_amplitude: 'bool' = True, scatter_decimate: 'int | None' = None, cmap: 'str' = 'inferno', color: 'str' = 'Gray', clim: 'tuple[float, float] | None' = None, alpha: 'float' = 1, backend: 'str | None' = None, **backend_kwargs)
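The peaks array layout and the `scatter_decimate` behavior described above can be sketched with synthetic data. This is a rough numpy-only illustration (the field names follow the dtype quoted in the docstring; the values are made up), not the widget's internal code:

```python
import numpy as np

# Synthetic "peaks" array with the dtype described above
peak_dtype = [("sample_index", "int64"), ("channel_index", "int64"),
              ("amplitude", "float64"), ("segment_index", "int64")]
rng = np.random.default_rng(0)
num_peaks = 1000
peaks = np.zeros(num_peaks, dtype=peak_dtype)
peaks["sample_index"] = np.sort(rng.integers(0, 30000 * 10, num_peaks))
peaks["channel_index"] = rng.integers(0, 64, num_peaks)
peaks["amplitude"] = -np.abs(rng.normal(50.0, 20.0, num_peaks))

# scatter_decimate=n keeps every nth peak for plotting
scatter_decimate = 10
kept = peaks[::scatter_decimate]

# x axis of the raster map: time in seconds; y axis: estimated peak depth
times_s = kept["sample_index"] / 30000.0
```

The widget itself would then scatter `times_s` against the peak depths from `peak_locations`, colored by `kept["amplitude"]` when `color_amplitude=True`.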

Class: plot_isi_distribution
  Docstring:
        Plots spike train ISI distribution.
    
        Parameters
        ----------
        sorting : SortingExtractor
            The sorting extractor object
        unit_ids : list or None, default: None
            List of unit ids
        bin_ms : float, default: 1.0
            Bin size in ms
        window_ms : float, default: 100.0
            Window size in ms
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting, unit_ids=None, window_ms=100.0, bin_ms=1.0, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
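The quantity this widget displays can be sketched in a few lines of numpy: per-unit inter-spike intervals, histogrammed with `bin_ms`-wide bins over a `window_ms` window. A hedged sketch with a synthetic spike train (not the widget's actual implementation):

```python
import numpy as np

sampling_frequency = 30000.0
rng = np.random.default_rng(42)
# Synthetic spike train: 500 spikes over 100 s
spike_frames = np.sort(rng.choice(int(100 * sampling_frequency), size=500, replace=False))
spike_times_ms = spike_frames / sampling_frequency * 1000.0

window_ms, bin_ms = 100.0, 1.0
isis_ms = np.diff(spike_times_ms)               # inter-spike intervals in ms
edges = np.arange(0.0, window_ms + bin_ms, bin_ms)
counts, edges = np.histogram(isis_ms, bins=edges)
# counts[i] = number of ISIs falling in [edges[i], edges[i+1])
```

The widget draws one such histogram per unit in `unit_ids`.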

Class: plot_locations
  Docstring:
        Plots spike locations as a function of time
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The input sorting analyzer
        unit_ids : list or None, default: None
            List of unit ids
        segment_index : int or None, default: None
            The segment index (or None if mono-segment)
        max_spikes_per_unit : int or None, default: None
            Number of max spikes per unit to display. Use None for all spikes
        plot_histograms : bool, default: False
            If True, a histogram of the locations is plotted on the right axis
            (matplotlib backend)
        bins : int or None, default: None
            If plot_histogram is True, the number of bins for the location histogram.
            If None this is automatically adjusted
        plot_legend : bool, default: True
            True includes legend in plot
        locations_axis : str, default: 'y'
            Which location axis to use when plotting locations.
        backend: str
        
        * matplotlib
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, unit_colors=None, segment_index=None, max_spikes_per_unit=None, plot_histograms=False, bins=None, plot_legend=True, locations_axis='y', backend=None, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_motion
  Docstring:
        Plot the Motion object.
    
        Parameters
        ----------
        motion : Motion
            The motion object.
        segment_index : int | None, default: None
            If the Motion object has multiple segments, this must not be None.
        mode : "auto" | "line" | "map", default: "line"
            How to plot the motion.
            "line" plots estimated motion at different depths as lines.
            "map" plots estimated motion at different depths as a heatmap.
            "auto" makes it automatic depending on the number of motion depths.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, motion: 'Motion', segment_index: 'int | None' = None, mode: 'str' = 'line', motion_lim: 'float | None' = None, backend: 'str | None' = None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_motion_info
  Docstring:
        Plot motion information from the motion_info dictionary returned by the `correct_motion()` function.
        This widget plots:
            * the motion itself
            * the drift raster map (peak depth vs time) before correction
            * the drift raster map (peak depth vs time) after correction
    
        Parameters
        ----------
        motion_info : dict
            The motion info returned by correct_motion() or loaded back with load_motion_info().
        recording : RecordingExtractor
            The recording extractor object
        segment_index : int, default: None
            The segment index to display.
        sampling_frequency : float, default: None
            The sampling frequency (needed if recording is None).
        depth_lim : tuple or None, default: None
            The min and max depth to display. If None, the min and max of the recording are used.
        motion_lim : tuple or None, default: None
            The min and max motion to display. If None, the min and max of the motion are used.
        scatter_decimate : int or None, default: None
            If set to n, every nth spike is kept for plotting.
        color_amplitude : bool, default: False
            If True, the color of the scatter points is the amplitude of the peaks.
        amplitude_cmap : str, default: "inferno"
            The colormap to use for the amplitude.
        amplitude_color : str, default: "Gray"
            The color of the scatter points if color_amplitude is False.
        amplitude_clim : tuple or None, default: None
            The min and max amplitude to display. If None, the min and max of the amplitudes are used.
        amplitude_alpha : float, default: 1
            The alpha of the scatter points.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, motion_info: 'dict', recording: 'BaseRecording', segment_index: 'int | None' = None, depth_lim: 'tuple[float, float] | None' = None, motion_lim: 'tuple[float, float] | None' = None, color_amplitude: 'bool' = False, scatter_decimate: 'int | None' = None, amplitude_cmap: 'str' = 'inferno', amplitude_color: 'str' = 'Gray', amplitude_clim: 'tuple[float, float] | None' = None, amplitude_alpha: 'float' = 1, backend: 'str | None' = None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_multicomparison_agreement
  Docstring:
        Plots multi comparison agreement as pie or bar plot.
    
        Parameters
        ----------
        multi_comparison : BaseMultiComparison
            The multi comparison object
        plot_type : "pie" | "bar", default: "pie"
            The plot type
        cmap : matplotlib colormap, default: "YlOrRd"
            The colormap to be used for the nodes
        fontsize : int, default: 9
            The text fontsize
        show_legend : bool, default: True
            If True a legend is shown
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, multi_comparison, plot_type='pie', cmap='YlOrRd', fontsize=9, show_legend=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_multicomparison_agreement_by_sorter
  Docstring:
        Plots multi comparison agreement for each sorter as pie or bar plot.
    
        Parameters
        ----------
        multi_comparison : BaseMultiComparison
            The multi comparison object
        plot_type : "pie" | "bar", default: "pie"
            The plot type
        cmap : matplotlib colormap, default: "YlOrRd"
            The colormap to be used for the nodes
        fontsize : int, default: 9
            The text fontsize
        show_legend : bool, default: True
            Show the legend in the last axes
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, multi_comparison, plot_type='pie', cmap='YlOrRd', fontsize=9, show_legend=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_multicomparison_graph
  Docstring:
        Plots multi comparison graph.
    
        Parameters
        ----------
        multi_comparison : BaseMultiComparison
            The multi comparison object
        draw_labels : bool, default: False
            If True unit labels are shown
        node_cmap : matplotlib colormap, default: "viridis"
            The colormap to be used for the nodes
        edge_cmap : matplotlib colormap, default: "hot"
            The colormap to be used for the edges
        alpha_edges : float, default: 0.5
            Alpha value for edges
        colorbar : bool, default: False
            If True a colorbar for the edges is plotted
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, multi_comparison, draw_labels=False, node_cmap='viridis', edge_cmap='hot', alpha_edges=0.5, colorbar=False, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_peak_activity
  Docstring:
        Plots spike rate (estimated with detect_peaks()) as a 2D activity map.
    
        Can be static (bin_duration_s=None) or animated (bin_duration_s=60.)
    
        Parameters
        ----------
        recording : RecordingExtractor
            The recording extractor object.
        peaks : numpy array with peak_dtype
            The pre-detected peaks (computed with the `detect_peaks()` function).
        bin_duration_s : None or float, default: None
            If None, a static image is shown.
            If not None, an animation with one frame per bin is shown.
        with_contact_color : bool, default: True
            Plot rates with contact colors
        with_interpolated_map : bool, default: True
            Plot rates with interpolated map
        with_channel_ids : bool, default: False
            Add channel ids text on the probe
        with_color_bar : bool, default: True
            If True, a color bar is displayed
        color_range : tuple | list | None, default: None
            Sets the color bar range when animating or plotting.
            When None, uses the min-max of the entire time-series via imshow defaults.
            If tuple/list, the length must be 2 representing the range.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, recording, peaks, bin_duration_s=None, with_contact_color=True, with_interpolated_map=True, with_channel_ids=False, with_color_bar=True, color_range=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
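The `bin_duration_s` behavior described above (static map vs. one animation frame per time bin) amounts to counting peaks per channel, optionally within time bins. A rough numpy sketch with synthetic peaks (field names as in the peaks dtype; the binning here is an illustration of the parameter semantics, not the widget's actual code):

```python
import numpy as np

fs = 30000.0
num_channels = 32
rng = np.random.default_rng(1)
peaks = np.zeros(600, dtype=[("sample_index", "int64"), ("channel_index", "int64"),
                             ("amplitude", "float64"), ("segment_index", "int64")])
peaks["sample_index"] = np.sort(rng.integers(0, int(120 * fs), 600))  # 120 s recording
peaks["channel_index"] = rng.integers(0, num_channels, 600)

# bin_duration_s=None -> one static rate map: peaks per channel / total duration
total_rate = np.bincount(peaks["channel_index"], minlength=num_channels) / 120.0

# bin_duration_s=60. -> one frame per 60 s bin
bin_duration_s = 60.0
bin_index = (peaks["sample_index"] / fs / bin_duration_s).astype(int)
num_bins = bin_index.max() + 1
frames = np.zeros((num_bins, num_channels))
for b in range(num_bins):
    in_bin = peaks[bin_index == b]
    frames[b] = np.bincount(in_bin["channel_index"], minlength=num_channels) / bin_duration_s
```

Each row of `frames` would then be rendered on the probe layout as one animation frame.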

Class: plot_peaks_on_probe
  Docstring:
        Generate a plot of spike peaks, showing their estimated locations
        overlaid on the probe. Color scaling represents spike amplitude.
    
        The generated plot overlays the estimated position of a spike peak
        (as a single point for each peak) onto a plot of the probe. The
        dimensions of the plot are x axis: probe width, y axis: probe depth.
    
        Plots of different sets of peaks can be created on subplots, by
        passing a list of peaks and corresponding peak locations.
    
        Parameters
        ----------
        recording : Recording
            A SpikeInterface recording object.
        peaks : np.array | list[np.ndarray]
            SpikeInterface 'peaks' array created with `detect_peaks()`,
            an array of length num_peaks with entries:
                (sample_index, channel_index, amplitude, segment_index)
            To plot different sets of peaks in subplots, pass a list of peaks, each
            with a corresponding entry in a list passed to `peak_locations`.
        peak_locations : np.array | list[np.ndarray]
            A SpikeInterface 'peak_locations' array created with `localize_peaks()`:
            an array of length num_peaks with entries: (x, y).
            To plot multiple peaks in subplots, pass a list of `peak_locations`
            here with each entry having a corresponding `peaks`.
        segment_index : None | int, default: None
            If set, only peaks from this recording segment will be used.
        time_range : None | Tuple, default: None
            The time period over which to include peaks. If `None`, peaks
            across the entire recording will be shown.
        ylim : None | Tuple, default: None
            The y-axis limits (i.e. the probe depth). If `None`, the entire
            probe will be displayed.
        decimate : int, default: 5
            For performance reasons, every nth peak is shown on the plot,
            where n is set by decimate. To plot all peaks, set `decimate=1`.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, recording, peaks, peak_locations, segment_index=None, time_range=None, ylim=None, decimate=5, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
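The `segment_index`, `time_range`, and `decimate` options described above boil down to masking the peaks array before scattering it on the probe. A hedged numpy sketch with synthetic peaks (illustrative values; not the widget's actual implementation):

```python
import numpy as np

fs = 30000.0
rng = np.random.default_rng(7)
peaks = np.zeros(400, dtype=[("sample_index", "int64"), ("channel_index", "int64"),
                             ("amplitude", "float64"), ("segment_index", "int64")])
peaks["sample_index"] = np.sort(rng.integers(0, int(20 * fs), 400))  # 20 s recording
peaks["segment_index"] = rng.integers(0, 2, 400)                     # two segments

# segment_index: keep only peaks from one recording segment
mask = peaks["segment_index"] == 0

# time_range: keep only peaks between 5 s and 15 s
t = peaks["sample_index"] / fs
mask &= (t >= 5.0) & (t <= 15.0)

# decimate: of the remaining peaks, keep every 5th for performance (decimate=1 keeps all)
shown = peaks[mask][::5]
```

The corresponding entries of `peak_locations` would give the (x, y) coordinates actually plotted.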

Class: plot_potential_merges
  Docstring:
        Plots potential merges
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The input sorting analyzer
        potential_merges : list of lists or tuples
            List of potential merges (see `spikeinterface.curation.get_potential_auto_merges`)
        segment_index : int, default: 0
            The segment index to display
        max_spikes_per_unit : int, default: 100
            The maximum number of spikes to display per unit
        backend: str
        
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', potential_merges: 'list', unit_colors: 'list' = None, segment_index: 'int' = 0, max_spikes_per_unit: 'int' = 100, backend=None, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_probe_map
  Docstring:
        Plot the probe of a recording.
    
        Parameters
        ----------
        recording : RecordingExtractor
            The recording extractor object
        color_channels : list or matplotlib color
            List of colors to be associated with each channel id; if a single color is given,
            all channels take that color
        with_channel_ids : bool, default: False
            Add channel ids text on the probe
        **plot_probe_kwargs : keyword arguments for probeinterface.plotting.plot_probe_group() function
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, recording, color_channels=None, with_channel_ids=False, backend=None, **backend_or_plot_probe_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_quality_metrics
  Docstring:
        Plots quality metrics distributions.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The object to get quality metrics from
        unit_ids : list or None, default: None
            List of unit ids
        include_metrics : list or None, default: None
            If given, a list of quality metrics to include
        skip_metrics : list or None, default: None
            If given, a list of quality metrics to skip
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, include_metrics=None, skip_metrics=None, unit_colors=None, hide_unit_selector=False, backend=None, **backend_kwargs)

Class: plot_rasters
  Docstring:
        Plots spike train rasters.
    
        Parameters
        ----------
        sorting : SortingExtractor | None, default: None
            A sorting object
        sorting_analyzer : SortingAnalyzer  | None, default: None
            A sorting analyzer object
        segment_index : None or int, default: None
            The segment index.
        unit_ids : list or None, default: None
            List of unit ids
        time_range : list or None, default: None
            List with start time and end time
        color : matplotlib color, default: "k"
            The color to be used
        backend: str
        
        * matplotlib
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting=None, sorting_analyzer=None, segment_index=None, unit_ids=None, time_range=None, color='k', backend=None, **backend_kwargs)
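What a raster plot displays is simply each unit's spike times within `time_range`, drawn as ticks at a per-unit y position. A minimal numpy sketch of that data preparation (synthetic spike trains; unit ids and times are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic spike trains (in seconds) for three units
spike_trains = {unit_id: np.sort(rng.uniform(0.0, 60.0, size=200))
                for unit_id in ["u0", "u1", "u2"]}

time_range = (10.0, 20.0)
raster = {}
for y, (unit_id, st) in enumerate(spike_trains.items()):
    # Keep only spikes inside the displayed time range
    in_range = st[(st >= time_range[0]) & (st <= time_range[1])]
    # x values: spike times; y values: the unit's row on the raster
    raster[unit_id] = (in_range, np.full(in_range.size, y))
```

The widget does this per unit in `unit_ids` and draws each (x, y) pair as a tick in the chosen `color`.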

Class: plot_sorting_summary
  Docstring:
        Plots spike sorting summary.
        This is the main viewer to visualize the final result with several sub-views.
        It uses sortingview (in a web browser) or spikeinterface-gui (with Qt).
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer object
        unit_ids : list or None, default: None
            List of unit ids
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply
            If SortingAnalyzer is already sparse, the argument is ignored
        max_amplitudes_per_unit : int or None, default: None
            Maximum number of spikes per unit for plotting amplitudes.
            If None, all spikes are plotted
        min_similarity_for_correlograms : float, default: 0.2
            Threshold for computing pair-wise cross-correlograms. If template similarity between two units
            is below this threshold, the cross-correlogram is not computed
            (sortingview backend)
        curation : bool, default: False
            If True, manual curation is enabled
            (sortingview backend)
        label_choices : list or None, default: None
            List of labels to be added to the curation table
            (sortingview backend)
        displayed_unit_properties : list or None, default: None
            List of properties to be added to the unit table.
            These may be drawn from the sorting extractor, and, if available,
            the quality_metrics/template_metrics/unit_locations extensions of the SortingAnalyzer.
            See all properties available with sorting.get_property_keys(), and, if available,
            analyzer.get_extension("quality_metrics").get_data().columns and
            analyzer.get_extension("template_metrics").get_data().columns.
        extra_unit_properties : dict or None, default: None
            A dict with extra units properties to display.
            The key is the property name and the value must be a numpy.array.
        curation_dict : dict or None, default: None
            When curation is True, the viewer can optionally be given a previous 'curation_dict'
            to continue/check previous curations on this analyzer.
            In this case label_definitions must be None because it is already included in the curation_dict.
            (spikeinterface_gui backend)
        label_definitions : dict or None, default: None
            When curation is True, the user can optionally provide a label_definitions dict.
            This replaces the label_choices in the curation_format.
            (spikeinterface_gui backend)
        backend: str
        
        * sortingview
        * spikeinterface_gui
    
    **backend_kwargs: kwargs
        
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, sparsity=None, max_amplitudes_per_unit=None, min_similarity_for_correlograms=0.2, curation=False, displayed_unit_properties=None, extra_unit_properties=None, label_choices=None, curation_dict=None, label_definitions=None, backend=None, unit_table_properties=None, **backend_kwargs)
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_spikeinterface_gui(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_spike_locations
  Docstring:
        Plots spike locations.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The object to get spike locations from
        unit_ids : list or None, default: None
            List of unit ids
        segment_index : int or None, default: None
            The segment index (or None if mono-segment)
        max_spikes_per_unit : int or None, default: 500
            Number of max spikes per unit to display. Use None for all spikes.
        with_channel_ids : bool, default: False
            Add channel ids text on the probe
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        plot_all_units : bool, default: True
            If True, all units are plotted. The unselected ones (not in unit_ids),
            are plotted in grey (matplotlib backend)
        plot_legend : bool, default: False
            If True, the legend is plotted (matplotlib backend)
        hide_axis : bool, default: False
            If True, the axis is set to off (matplotlib backend)
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, segment_index=None, max_spikes_per_unit=500, with_channel_ids=False, unit_colors=None, hide_unit_selector=False, plot_all_units=True, plot_legend=False, hide_axis=False, backend=None, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_spikes_on_traces
  Docstring:
        Plots unit spikes/waveforms over traces.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer
        channel_ids : list or None, default: None
            The channel ids to display
        unit_ids : list or None, default: None
            List of unit ids
        order_channel_by_depth : bool, default: False
            If True, channels are ordered by depth
        time_range : list or None, default: None
            List with start time and end time in seconds
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply
            If SortingAnalyzer is already sparse, the argument is ignored
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        mode : "line" | "map" | "auto", default: "auto"
            * "line": classical for low channel count
            * "map": for high channel count use color heat map
            * "auto": auto switch depending on the channel count ("line" if less than 64 channels, "map" otherwise)
        return_scaled : bool, default: False
            If True and the recording has scaled traces, it plots the scaled traces
        cmap : str, default: "RdBu"
            matplotlib colormap used in mode "map"
        show_channel_ids : bool, default: False
            Set yticks with channel ids
        color_groups : bool, default: False
            If True groups are plotted with different colors
        color : str or None, default: None
            The color used to draw the traces
        clim : None, tuple or dict, default: None
            When mode is "map", this argument controls color limits.
            If dict, keys should be the same as recording keys
        scale : float, default: 1
            Scale factor for the traces
        with_colorbar : bool, default: True
            When mode is "map", a colorbar is added
        tile_size : int, default: 512
            For sortingview backend, the size of each tile in the rendered image
        seconds_per_row : float, default: 0.2
            For "map" mode and sortingview backend, seconds to render in each row
        backend: str
        
        * matplotlib
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', segment_index=None, channel_ids=None, unit_ids=None, order_channel_by_depth=False, time_range=None, unit_colors=None, sparsity=None, mode='auto', return_scaled=False, cmap='RdBu', show_channel_ids=False, color_groups=False, color=None, clim=None, tile_size=512, seconds_per_row=0.2, scale=1, spike_width_ms=4, spike_height_um=20, with_colorbar=True, backend=None, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
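The `"auto"` mode rule documented above ("line" if less than 64 channels, "map" otherwise) can be stated directly in code; a small sketch (the function name is illustrative, not part of the spikeinterface API):

```python
def resolve_mode(mode: str, num_channels: int) -> str:
    """Resolve a plotting mode per the documented "auto" rule."""
    if mode == "auto":
        # "line" for low channel counts, "map" heat map otherwise
        return "line" if num_channels < 64 else "map"
    if mode not in ("line", "map"):
        raise ValueError(f"Unknown mode: {mode!r}")
    return mode
```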

Class: plot_study_agreement_matrix
  Docstring:
        Plot agreement matrix.
    
        Parameters
        ----------
        study : GroundTruthStudy
            A study object.
        case_keys : list or None
            A selection of cases to plot, if None, then all.
        ordered : bool, default: True
            Order units by best agreement scores.
            This makes the agreement visible along the diagonal.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, ordered=True, case_keys=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_study_comparison_collision_by_similarity
  Docstring:
        Plots CollisionGTComparison pair by pair, ordered by cosine_similarity, for all
        cases in a study.
    
        Parameters
        ----------
        study : CollisionGTStudy
            The collision study object.
        case_keys : list or None
            A selection of cases to plot, if None, then all.
        metric : "cosine_similarity"
            The metric used for ordering
        similarity_bins : array
            If mode is "lines", the bins used to average the pairs
        cmap : str
            Colormap used to show averages if mode is "lines"
        good_only : bool, default: False
            If True, keep only the pairs with a non-zero accuracy (found templates)
        min_accuracy : float, default: 0.9
            If good_only is True, the minimum accuracy each unit must have, individually, to be
            considered in a putative pair
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, case_keys=None, metric='cosine_similarity', similarity_bins=array([-0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0]), show_legend=False, ylim=(0.5, 1), good_only=False, min_accuracy=0.9, cmap='winter', backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
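The `similarity_bins` default printed in the signature is eight bin edges from -0.4 to 1.0 in steps of 0.2 (the tiny `-5.55e-17` entry in the raw repr is floating-point noise for zero). Assuming NumPy, a numerically equivalent and readable construction:

```python
import numpy as np

# 8 edges from -0.4 to 1.0 in steps of 0.2; numerically equal to the
# printed default of plot_study_comparison_collision_by_similarity
similarity_bins = np.linspace(-0.4, 1.0, 8)
```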

Class: plot_study_performances
  Docstring:
        Plot performances over case for a study.
    
    
        Parameters
        ----------
        study : GroundTruthStudy
            A study object.
        mode : "ordered" | "snr" | "swarm", default: "ordered"
            Which plot mode to use:
    
            * "ordered": plot performance metrics vs unit indices ordered by decreasing accuracy
            * "snr": plot performance metrics vs snr
            * "swarm": plot performance metrics as a swarm plot (see seaborn.swarmplot for details)
        performance_names : list or tuple, default: ("accuracy", "precision", "recall")
            Which performances to plot ("accuracy", "precision", "recall")
        case_keys : list or None
            A selection of cases to plot, if None, then all.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, mode='ordered', performance_names=('accuracy', 'precision', 'recall'), case_keys=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_study_run_times
  Docstring:
        Plot sorter run times for a SorterStudy.
    
        Parameters
        ----------
        study : SorterStudy
            A study object.
        case_keys : list or None
            A selection of cases to plot, if None, then all.
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, case_keys=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_study_summary
  Docstring:
        Plot a summary of a ground truth study.
        Internally this plotting function runs:
    
          * plot_study_run_times
          * plot_study_unit_counts
          * plot_study_performances
          * plot_study_agreement_matrix
    
        Parameters
        ----------
        study : GroundTruthStudy
            A study object.
        case_keys : list or None, default: None
            A selection of cases to plot, if None, then all.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, case_keys=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_study_unit_counts
  Docstring:
        Plot unit counts for a study: "num_well_detected", "num_false_positive", "num_redundant", "num_overmerged"
    
        Parameters
        ----------
        study : SorterStudy
            A study object.
        case_keys : list or None
            A selection of cases to plot, if None, then all.
    
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, case_keys=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None

Class: plot_template_metrics
  Docstring:
        Plots template metrics distributions.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The object to get template metrics from
        unit_ids : list or None, default: None
            List of unit ids
        include_metrics : list or None, default: None
            If given, a list of template metrics to include
        skip_metrics : list or None, default: None
            If given, a list of template metrics to skip
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, include_metrics=None, skip_metrics=None, unit_colors=None, hide_unit_selector=False, backend=None, **backend_kwargs)

Class: plot_template_similarity
  Docstring:
        Plots unit template similarity.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The object to get template similarity from
        unit_ids : list or None, default: None
            List of unit ids
        display_diagonal_values : bool, default: False
            If False, the diagonal is displayed as zeros.
            If True, the similarity values (all 1s) are displayed
        cmap : matplotlib colormap, default: "viridis"
            The matplotlib colormap
        show_unit_ticks : bool, default: False
            If True, ticks display unit ids
        show_colorbar : bool, default: True
            If True, color bar is displayed
        backend: str
        
        * matplotlib
        * sortingview
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids=None, cmap='viridis', display_diagonal_values=False, show_unit_ticks=False, show_colorbar=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None

Function: plot_timeseries(*args, **kwargs)
  Docstring:
    None

Class: plot_traces
  Docstring:
        Plots recording timeseries.
    
        Parameters
        ----------
        recording : RecordingExtractor, dict, or list
            The recording extractor object. If dict (or list) then it is a multi-layer display to compare, for example,
            different processing steps
        segment_index : None or int, default: None
            The segment index (required for multi-segment recordings)
        channel_ids : list or None, default: None
            The channel ids to display
        order_channel_by_depth : bool, default: False
            Reorder channels by depth
        time_range : list, tuple or None, default: None
            List with start time and end time
        mode : "line" | "map" | "auto", default: "auto"
            Three possible modes
            * "line": classical for low channel count
            * "map": for high channel count use color heat map
            * "auto": auto switch depending on the channel count ("line" if less than 64 channels, "map" otherwise)
        return_scaled : bool, default: False
            If True and the recording has scaled traces, it plots the scaled traces
        events : np.array | list[np.ndarray] | None, default: None
            Events to display as vertical lines.
            The numpy arrays can either be of dtype float, with event times in seconds,
            or a structured array with a "time" field
            and optional "duration" and "label" fields.
            For multi-segment recordings, provide a list of numpy array events, one for each segment.
        cmap : matplotlib colormap, default: "RdBu_r"
            matplotlib colormap used in mode "map"
        show_channel_ids : bool, default: False
            Set yticks with channel ids
        color_groups : bool, default: False
            If True groups are plotted with different colors
        color : str or None, default: None
            The color used to draw the traces
        clim : None, tuple or dict, default: None
            When mode is "map", this argument controls color limits.
            If dict, keys should be the same as recording keys
        scale : float, default: 1
            Scale factor for the traces
        vspacing_factor : float, default: 1.5
            Vertical spacing between channels as a multiple of maximum channel amplitude
        with_colorbar : bool, default: True
            When mode is "map", a colorbar is added
        tile_size : int, default: 1500
            For sortingview backend, the size of each tile in the rendered image
        seconds_per_row : float, default: 0.2
            For "map" mode and sortingview backend, seconds to render in each row
        add_legend : bool, default: True
            If True adds legend to figures
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
        * ephyviewer
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, recording, segment_index=None, channel_ids=None, order_channel_by_depth=False, time_range=None, mode='auto', return_scaled=False, cmap='RdBu_r', show_channel_ids=False, events=None, events_color='gray', events_alpha=0.5, color_groups=False, color=None, clim=None, tile_size=1500, seconds_per_row=0.2, scale=1, vspacing_factor=1.5, with_colorbar=True, add_legend=True, backend=None, **backend_kwargs)
  Method: plot_ephyviewer(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
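The `events` argument of `plot_traces` accepts two forms, per the docstring above: a plain float array of event times, or a structured array with a "time" field plus optional "duration" and "label" fields. A sketch of constructing both (the field values are made up; assumes NumPy):

```python
import numpy as np

# Form 1: plain float array of event times, in seconds
events_simple = np.array([0.5, 1.2, 3.7])

# Form 2: structured array with "time" plus optional "duration" and "label"
events_rich = np.zeros(2, dtype=[("time", "float64"),
                                 ("duration", "float64"),
                                 ("label", "U32")])
events_rich["time"] = [0.5, 2.0]
events_rich["duration"] = [0.1, 0.25]
events_rich["label"] = ["stim_on", "stim_off"]
```

For a multi-segment recording, wrap one such array per segment in a list, e.g. `events=[events_seg0, events_seg1]`.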

Class: plot_unit_depths
  Docstring:
        Plot unit depths
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer object
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        depth_axis : int, default: 1
            The dimension of unit_locations that is depth
        peak_sign : "neg" | "pos" | "both", default: "neg"
            Sign of peak for amplitudes
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer, unit_colors=None, depth_axis=1, peak_sign='neg', backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
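
  A minimal sketch for `plot_unit_depths`. The toy-data generator and the extension names (`random_spikes`, `waveforms`, `templates`, `unit_locations`) are assumed from recent spikeinterface versions:

  ```python
  # Hedged usage sketch for plot_unit_depths with toy data.
  import matplotlib
  matplotlib.use("Agg")  # headless rendering

  import spikeinterface.full as si

  recording, sorting = si.generate_ground_truth_recording(durations=[5.0], num_units=4, seed=0)
  analyzer = si.create_sorting_analyzer(sorting, recording, sparse=True)
  # unit_locations (and the templates it depends on) are needed for depths
  analyzer.compute(["random_spikes", "waveforms", "templates", "unit_locations"])

  w = si.plot_unit_depths(analyzer, depth_axis=1, peak_sign="neg", backend="matplotlib")
  w.figure.savefig("unit_depths.png")
  ```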

Class: plot_unit_locations
  Docstring:
        Plots each unit's location.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer; it must contain the "unit_locations" extension
        unit_ids : list or None, default: None
            List of unit ids
        with_channel_ids : bool, default: False
            Add channel ids text on the probe
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        hide_unit_selector : bool, default: False
            If True, the unit selector is not displayed (sortingview backend)
        plot_all_units : bool, default: True
            If True, all units are plotted. The unselected ones (not in unit_ids)
            are plotted in grey (matplotlib backend)
        plot_legend : bool, default: False
            If True, the legend is plotted (matplotlib backend)
        hide_axis : bool, default: False
            If True, the axis is set to off (matplotlib backend)
        margin : float, default: 50
            Amount of margin to add to plot, beyond the extremum unit locations.
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer: 'SortingAnalyzer', unit_ids: 'list | None' = None, with_channel_ids: 'bool' = False, unit_colors: 'dict | None' = None, hide_unit_selector: 'bool' = False, plot_all_units: 'bool' = True, plot_legend: 'bool' = False, hide_axis: 'bool' = False, backend: 'str | None' = None, margin: 'float' = 50, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
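
  A minimal sketch for `plot_unit_locations`, assuming the toy-data generator and extension names from recent spikeinterface versions:

  ```python
  # Hedged usage sketch for plot_unit_locations with toy data.
  import matplotlib
  matplotlib.use("Agg")  # headless rendering

  import spikeinterface.full as si

  recording, sorting = si.generate_ground_truth_recording(durations=[5.0], num_units=4, seed=0)
  analyzer = si.create_sorting_analyzer(sorting, recording, sparse=True)
  analyzer.compute(["random_spikes", "waveforms", "templates", "unit_locations"])

  # Highlight two units; the others are drawn in grey because plot_all_units=True
  w = si.plot_unit_locations(
      analyzer, unit_ids=sorting.unit_ids[:2], plot_legend=True, backend="matplotlib"
  )
  w.figure.savefig("unit_locations.png")
  ```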

Class: plot_unit_presence
  Docstring:
        Estimates the probability density function of spiking for each unit using Gaussian kernels.
    
        Parameters
        ----------
        sorting : SortingExtractor
            The sorting extractor object
        segment_index : None or int
            The segment index.
        time_range : list or None, default: None
            List with start time and end time
        bin_duration_s : float, default: 0.05
            Bin size (in seconds) for the heat map time axis
        smooth_sigma : float, default: 4.5
            Sigma for the Gaussian kernel (in number of bins)
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting, segment_index=None, time_range=None, bin_duration_s=0.05, smooth_sigma=4.5, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
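
  A minimal sketch for `plot_unit_presence`; only a sorting object is needed. The toy-data generator is assumed from recent spikeinterface versions:

  ```python
  # Hedged usage sketch for plot_unit_presence with toy data.
  import matplotlib
  matplotlib.use("Agg")  # headless rendering

  import spikeinterface.full as si

  _, sorting = si.generate_ground_truth_recording(durations=[10.0], num_units=4, seed=0)

  # Presence heat map over 0.05 s bins, smoothed with a Gaussian kernel
  w = si.plot_unit_presence(sorting, bin_duration_s=0.05, smooth_sigma=4.5, backend="matplotlib")
  w.figure.savefig("unit_presence.png")
  ```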

Class: plot_unit_probe_map
  Docstring:
        Plots unit map. Amplitude is color coded on probe contact.
    
        Can be static (animated=False) or animated (animated=True)
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
        unit_ids : list
            List of unit ids.
        channel_ids : list
            The channel ids to display
        animated : bool, default: False
            Animation for amplitude on time
        with_channel_ids : bool, default: False
            Add channel ids text on the probe
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer, unit_ids=None, channel_ids=None, animated=None, with_channel_ids=False, colorbar=True, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
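
  A minimal sketch for the static form of `plot_unit_probe_map` (color-coded amplitude per contact), assuming the toy-data generator and extension names from recent spikeinterface versions:

  ```python
  # Hedged usage sketch for plot_unit_probe_map with toy data.
  import matplotlib
  matplotlib.use("Agg")  # headless rendering

  import spikeinterface.full as si

  recording, sorting = si.generate_ground_truth_recording(durations=[5.0], num_units=4, seed=0)
  analyzer = si.create_sorting_analyzer(sorting, recording, sparse=True)
  # Templates are needed to map amplitudes onto the probe contacts
  analyzer.compute(["random_spikes", "waveforms", "templates"])

  # Static map (animated=False by default) for the first two units
  w = si.plot_unit_probe_map(analyzer, unit_ids=sorting.unit_ids[:2], backend="matplotlib")
  w.figure.savefig("unit_probe_map.png")
  ```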

Class: plot_unit_summary
  Docstring:
        Plot a unit summary.
    
        If amplitudes are already computed, they are displayed.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer object
        unit_id : int or str
            The unit id to plot the summary of
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply.
            If SortingAnalyzer is already sparse, the argument is ignored
        subwidget_kwargs : dict or None, default: None
            Parameters for the subwidgets in a nested dictionary
    
                * unit_locations : UnitLocationsWidget (see UnitLocationsWidget for details)
                * unit_waveforms : UnitWaveformsWidget (see UnitWaveformsWidget for details)
                * unit_waveform_density_map : UnitWaveformDensityMapWidget (see UnitWaveformDensityMapWidget for details)
                * autocorrelograms : AutoCorrelogramsWidget (see AutoCorrelogramsWidget for details)
                * amplitudes : AmplitudesWidget (see AmplitudesWidget for details)
    
            Please note that the unit_colors should not be set in subwidget_kwargs, but directly as a parameter of plot_unit_summary.
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer, unit_id, unit_colors=None, sparsity=None, subwidget_kwargs=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
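
  A minimal sketch for `plot_unit_summary`. Because the summary combines several subwidgets, their extensions are computed up front; the toy-data generator and extension names are assumed from recent spikeinterface versions, and `spike_amplitudes` is optional (displayed only if already computed):

  ```python
  # Hedged usage sketch for plot_unit_summary with toy data.
  import matplotlib
  matplotlib.use("Agg")  # headless rendering

  import spikeinterface.full as si

  recording, sorting = si.generate_ground_truth_recording(durations=[5.0], num_units=4, seed=0)
  analyzer = si.create_sorting_analyzer(sorting, recording, sparse=True)
  # Extensions for the subwidgets: locations, waveforms, correlograms, amplitudes
  analyzer.compute([
      "random_spikes", "waveforms", "templates",
      "unit_locations", "correlograms", "spike_amplitudes",
  ])

  w = si.plot_unit_summary(analyzer, unit_id=sorting.unit_ids[0], backend="matplotlib")
  w.figure.savefig("unit_summary.png")
  ```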

Class: plot_unit_templates
  Docstring:
        Plots unit templates.
    
        Parameters
        ----------
        sorting_analyzer_or_templates : SortingAnalyzer | Templates
            The SortingAnalyzer or Templates object.
            If Templates is given, the "plot_waveforms" argument is set to False
        channel_ids : list or None, default: None
            The channel ids to display
        unit_ids : list or None, default: None
            List of unit ids
        plot_templates : bool, default: True
            If True, templates are plotted over the waveforms
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply
            If SortingAnalyzer is already sparse, the argument is ignored
        set_title : bool, default: True
            Create a plot title with the unit number if True
        plot_channels : bool, default: False
            Plot channel locations below traces
        unit_selected_waveforms : None or dict, default: None
            A dict whose keys are unit_ids and whose values are the subsets of waveform indices
            that should be displayed (matplotlib backend)
        max_spikes_per_unit : int or None, default: 50
            If given and unit_selected_waveforms is None, only max_spikes_per_unit random waveforms
            are displayed per unit (matplotlib backend)
        scale : float, default: 1
            Scale factor for the waveforms/templates (matplotlib backend)
        widen_narrow_scale : float, default: 1
            Scale factor for the x-axis of the waveforms/templates (matplotlib backend)
        axis_equal : bool, default: False
            Equal aspect ratio for x and y axis, to visualize the array geometry to scale
        lw_waveforms : float, default: 1
            Line width for the waveforms, (matplotlib backend)
        lw_templates : float, default: 2
            Line width for the templates, (matplotlib backend)
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        alpha_waveforms : float, default: 0.5
            Alpha value for waveforms (matplotlib backend)
        alpha_templates : float, default: 1
            Alpha value for templates, (matplotlib backend)
        shade_templates : bool, default: True
            If True, templates are shaded, see templates_percentile_shading argument
        templates_percentile_shading : float, tuple/list of floats, or None, default: (1, 25, 75, 99)
            It controls the shading of the templates.
            If None, the shading is +/- the standard deviation of the templates.
            If float, it controls the percentile of the template values used to shade the templates.
            Note that it is one-sided: if 5 is given, the 5th and 95th percentiles are used to shade
            the templates. If a list of floats, it needs to have an even number of elements, which control
            the lower and upper percentiles used to shade the templates. The first half of the elements
            are used for the lower bounds, and the second half for the upper bounds.
            Inner elements produce darker shadings. For sortingview backend only 2 or 4 elements are
            supported.
        scalebar : bool, default: False
            Display a scale bar on the waveforms plot (matplotlib backend)
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        same_axis : bool, default: False
            If True, waveforms and templates are displayed on the same axis (matplotlib backend)
        x_offset_units : bool, default: False
            In case same_axis is True, this parameter allows x-offsetting the waveforms for different units
            (recommended for a few units) (matplotlib backend)
        plot_legend : bool, default: True
            Display legend (matplotlib backend)
        backend: str
        
        * matplotlib
        * sortingview
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * sortingview:
    
            * generate_url: If True, the figurl URL is generated and printed, default: True
            * display: If True and in jupyter notebook/lab, the widget is displayed in the cell, default: True.
            * figlabel: The figurl figure label, default: None
            * height: The height of the sortingview View in jupyter, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, *args, **kargs)
  Method: plot_sortingview(self, data_plot, **backend_kwargs)
    Docstring:
      None
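
  A minimal sketch for `plot_unit_templates`, assuming the toy-data generator and extension names from recent spikeinterface versions. Shading is disabled here to keep the required extensions minimal:

  ```python
  # Hedged usage sketch for plot_unit_templates with toy data.
  import matplotlib
  matplotlib.use("Agg")  # headless rendering

  import spikeinterface.full as si

  recording, sorting = si.generate_ground_truth_recording(durations=[5.0], num_units=4, seed=0)
  analyzer = si.create_sorting_analyzer(sorting, recording, sparse=True)
  analyzer.compute(["random_spikes", "waveforms", "templates"])

  # Templates only (plot_waveforms is forced off for a Templates input anyway)
  w = si.plot_unit_templates(analyzer, shade_templates=False, backend="matplotlib")
  w.figure.savefig("unit_templates.png")
  ```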

Class: plot_unit_waveforms
  Docstring:
        Plots unit waveforms.
    
        Parameters
        ----------
        sorting_analyzer_or_templates : SortingAnalyzer | Templates
            The SortingAnalyzer or Templates object.
            If Templates is given, the "plot_waveforms" argument is set to False
        channel_ids : list or None, default: None
            The channel ids to display
        unit_ids : list or None, default: None
            List of unit ids
        plot_templates : bool, default: True
            If True, templates are plotted over the waveforms
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply
            If SortingAnalyzer is already sparse, the argument is ignored
        set_title : bool, default: True
            Create a plot title with the unit number if True
        plot_channels : bool, default: False
            Plot channel locations below traces
        unit_selected_waveforms : None or dict, default: None
            A dict whose keys are unit_ids and whose values are the subsets of waveform indices
            that should be displayed (matplotlib backend)
        max_spikes_per_unit : int or None, default: 50
            If given and unit_selected_waveforms is None, only max_spikes_per_unit random waveforms
            are displayed per unit (matplotlib backend)
        scale : float, default: 1
            Scale factor for the waveforms/templates (matplotlib backend)
        widen_narrow_scale : float, default: 1
            Scale factor for the x-axis of the waveforms/templates (matplotlib backend)
        axis_equal : bool, default: False
            Equal aspect ratio for x and y axis, to visualize the array geometry to scale
        lw_waveforms : float, default: 1
            Line width for the waveforms, (matplotlib backend)
        lw_templates : float, default: 2
            Line width for the templates, (matplotlib backend)
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        alpha_waveforms : float, default: 0.5
            Alpha value for waveforms (matplotlib backend)
        alpha_templates : float, default: 1
            Alpha value for templates, (matplotlib backend)
        shade_templates : bool, default: True
            If True, templates are shaded, see templates_percentile_shading argument
        templates_percentile_shading : float, tuple/list of floats, or None, default: (1, 25, 75, 99)
            It controls the shading of the templates.
            If None, the shading is +/- the standard deviation of the templates.
            If float, it controls the percentile of the template values used to shade the templates.
            Note that it is one-sided: if 5 is given, the 5th and 95th percentiles are used to shade
            the templates. If a list of floats, it needs to have an even number of elements, which control
            the lower and upper percentiles used to shade the templates. The first half of the elements
            are used for the lower bounds, and the second half for the upper bounds.
            Inner elements produce darker shadings. For sortingview backend only 2 or 4 elements are
            supported.
        scalebar : bool, default: False
            Display a scale bar on the waveforms plot (matplotlib backend)
        hide_unit_selector : bool, default: False
            For sortingview backend, if True the unit selector is not displayed
        same_axis : bool, default: False
            If True, waveforms and templates are displayed on the same axis (matplotlib backend)
        x_offset_units : bool, default: False
            In case same_axis is True, this parameter allows x-offsetting the waveforms for different units
            (recommended for a few units) (matplotlib backend)
        plot_legend : bool, default: True
            Display legend (matplotlib backend)
        backend: str
        
        * matplotlib
        * ipywidgets
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
        * ipywidgets:
    
            * width_cm: Width of the figure in cm, default: 10
            * height_cm: Height of the figure in cm, default: 6
            * display: If True, widgets are immediately displayed, default: True
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer_or_templates: 'SortingAnalyzer | Templates', channel_ids=None, unit_ids=None, plot_waveforms=True, plot_templates=True, plot_channels=False, unit_colors=None, sparsity=None, ncols=5, scale=1, widen_narrow_scale=1, lw_waveforms=1, lw_templates=2, axis_equal=False, unit_selected_waveforms=None, max_spikes_per_unit=50, set_title=True, same_axis=False, shade_templates=True, templates_percentile_shading=(1, 25, 75, 99), scalebar=False, x_offset_units=False, alpha_waveforms=0.5, alpha_templates=1, hide_unit_selector=False, plot_legend=True, backend=None, **backend_kwargs)
  Method: plot_ipywidgets(self, data_plot, **backend_kwargs)
    Docstring:
      None
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
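
  A minimal sketch for `plot_unit_waveforms`, assuming the toy-data generator and extension names from recent spikeinterface versions:

  ```python
  # Hedged usage sketch for plot_unit_waveforms with toy data.
  import matplotlib
  matplotlib.use("Agg")  # headless rendering

  import spikeinterface.full as si

  recording, sorting = si.generate_ground_truth_recording(durations=[5.0], num_units=4, seed=0)
  analyzer = si.create_sorting_analyzer(sorting, recording, sparse=True)
  analyzer.compute(["random_spikes", "waveforms", "templates"])

  # Overlay templates on up to 20 random waveforms per unit
  w = si.plot_unit_waveforms(
      analyzer, max_spikes_per_unit=20, plot_templates=True, shade_templates=False,
      backend="matplotlib",
  )
  w.figure.savefig("unit_waveforms.png")
  ```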

Class: plot_unit_waveforms_density_map
  Docstring:
        Plots unit waveforms using heat map density.
    
        Parameters
        ----------
        sorting_analyzer : SortingAnalyzer
            The SortingAnalyzer for calculating waveforms
        channel_ids : list or None, default: None
            The channel ids to display
        unit_ids : list or None, default: None
            List of unit ids
        sparsity : ChannelSparsity or None, default: None
            Optional ChannelSparsity to apply
            If SortingAnalyzer is already sparse, the argument is ignored
        use_max_channel : bool, default: False
            Use only the max channel
        peak_sign : "neg" | "pos" | "both", default: "neg"
            Used to detect max channel only when use_max_channel=True
        unit_colors : dict | None, default: None
            Dict of colors with unit ids as keys and colors as values. Colors can be any type accepted
            by matplotlib. If None, default colors are chosen using the `get_some_colors` function.
        same_axis : bool, default: False
            If True, all densities are plotted on the same axis and the displayed channels
            are the union of all channels across units
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, sorting_analyzer, channel_ids=None, unit_ids=None, sparsity=None, same_axis=False, use_max_channel=False, peak_sign='neg', unit_colors=None, backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
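
  A minimal sketch for `plot_unit_waveforms_density_map`, assuming the toy-data generator and extension names from recent spikeinterface versions:

  ```python
  # Hedged usage sketch for plot_unit_waveforms_density_map with toy data.
  import matplotlib
  matplotlib.use("Agg")  # headless rendering

  import spikeinterface.full as si

  recording, sorting = si.generate_ground_truth_recording(durations=[5.0], num_units=4, seed=0)
  analyzer = si.create_sorting_analyzer(sorting, recording, sparse=True)
  # The density map is built from the stored waveforms
  analyzer.compute(["random_spikes", "waveforms", "templates"])

  # Restrict each unit's density to its max channel (detected with peak_sign)
  w = si.plot_unit_waveforms_density_map(
      analyzer, use_max_channel=True, peak_sign="neg", backend="matplotlib"
  )
  w.figure.savefig("unit_waveforms_density.png")
  ```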

Function: set_default_plotter_backend(backend)
  Docstring:
    None

Class: wcls
  Docstring:
        Plots CollisionGTComparison pair by pair, ordered by cosine_similarity, for all
        cases in a study.
    
        Parameters
        ----------
        study : CollisionGTStudy
            The collision study object.
        case_keys : list or None
            A selection of cases to plot, if None, then all.
        metric : "cosine_similarity"
            Metric used for ordering
        similarity_bins : array
            If mode is "lines", the bins used to average the pairs
        cmap : str
            Colormap used to show averages if mode is "lines"
        good_only : bool, default: False
            If True, keep only the pairs with a non-zero accuracy (found templates)
        min_accuracy : float
            If good_only is True, the minimum accuracy every cell must have, individually, to be
            considered in a putative pair
        backend: str
        
        * matplotlib
    
    **backend_kwargs: kwargs
        
        * matplotlib:
    
            * figure: Matplotlib figure. When None, it is created, default: None
            * ax: Single matplotlib axis. When None, it is created, default: None
            * axes: Multiple matplotlib axes. When None, they are created, default: None
            * ncols: Number of columns to create in subplots, default: 5
            * figsize: Size of matplotlib figure, default: None
            * figtitle: The figure title, default: None
    
    
    Returns
    -------
    w : BaseWidget
        The output widget object.
    
    Notes
    -----
    When using the matplotlib backend, the returned `BaseWidget` contains the matplotlib fig and axis objects. This allows
    customization of plots using matplotlib machinery e.g. `returned_widget.ax.set_xlim((0,100))`.
        
  __init__(self, study, case_keys=None, metric='cosine_similarity', similarity_bins=np.array([-0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0]), show_legend=False, ylim=(0.5, 1), good_only=False, min_accuracy=0.9, cmap='winter', backend=None, **backend_kwargs)
  Method: plot_matplotlib(self, data_plot, **backend_kwargs)
    Docstring:
      None
