music21.voiceLeading

Objects representing unique elements in a score that contain special analysis routines to identify certain aspects of music theory. They are designed especially for use with theoryAnalyzer, which divides a score into these segments and returns a list of segments for later analysis.

The objects included here are:

music21.voiceLeading.getVerticalSliceFromObject(music21Obj, scoreObjectIsFrom, classFilterList=None)

Returns the VerticalSlice object corresponding to the given music21 object, given the score that the object comes from (under development).

>>> from music21 import *
>>> c = corpus.parse('bach/bwv66.6')
>>> n1 = c.flat.getElementsByClass(note.Note)[0]
>>> voiceLeading.getVerticalSliceFromObject(n1, c)
<music21.voiceLeading.VerticalSlice contentDict={0: [<music21.note.Note C#>], 1: [<music21.note.Note E>], 2: [<music21.note.Note A>], 3: [<music21.note.Note A>]}>

VoiceLeadingQuartet

Inherits from: Music21Object, JSONSerializer

class music21.voiceLeading.VoiceLeadingQuartet(v1n1=None, v1n2=None, v2n1=None, v2n2=None, key=C major)

An object consisting of four notes: v1n1, v1n2, v2n1, and v2n2, where v1n1 moves to v1n2 at the same time as v2n1 moves to v2n2. (v1n1 means voice 1 (the top voice), note 1 (the left-most note).)

Necessary for classifying types of voice-leading motion
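
For example (a minimal sketch; the results shown follow from the definitions above):

>>> from music21 import *
>>> vlq = voiceLeading.VoiceLeadingQuartet(note.Note('C4'), note.Note('D4'), note.Note('E4'), note.Note('F4'))
>>> vlq.v1n1
<music21.note.Note C>
>>> vlq.v2n2
<music21.note.Note F>
>>> vlq.similarMotion()
True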

VoiceLeadingQuartet attributes

fifth

class level reference interval

octave

class level reference interval

unison

class level reference interval
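
These resolve to interval.Interval objects (a brief sketch; the representations shown assume the standard Interval repr):

>>> from music21 import *
>>> vlq = voiceLeading.VoiceLeadingQuartet('C', 'D', 'E', 'F')
>>> vlq.fifth
<music21.interval.Interval P5>
>>> vlq.unison
<music21.interval.Interval P1>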

hIntervals

list of the two melodic intervals present, v1n1 to v1n2 and v2n1 to v2n2

vIntervals

list of the two harmonic intervals present, v1n1 to v2n1 and v1n2 to v2n2
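
For example (a minimal sketch; the intervals shown follow from the note spellings):

>>> from music21 import *
>>> vlq = voiceLeading.VoiceLeadingQuartet(note.Note('C4'), note.Note('D4'), note.Note('G4'), note.Note('A4'))
>>> vlq.vIntervals
[<music21.interval.Interval P5>, <music21.interval.Interval P5>]
>>> vlq.hIntervals
[<music21.interval.Interval M2>, <music21.interval.Interval M2>]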

Attributes inherited from Music21Object: classSortOrder, isSpanner, isStream, isVariant

VoiceLeadingQuartet properties

key

gets or sets the key of this voice-leading quartet, for use in theory analysis routines such as closesIncorrectly(). The default key is C major.

>>> from music21 import *
>>> vlq = VoiceLeadingQuartet('D','G','B','G')
>>> vlq.key
<music21.key.Key of C major>
>>> vlq.key = 'G'
>>> vlq.key
<music21.key.Key of G major>
v1n1

get or set note 1 for voice 1

>>> from music21 import *
>>> vl = VoiceLeadingQuartet('C', 'D', 'E', 'F')
>>> vl.v1n1
<music21.note.Note C>
v1n2

get or set note 2 for voice 1

>>> from music21 import *
>>> vl = VoiceLeadingQuartet('C', 'D', 'E', 'F')
>>> vl.v1n2
<music21.note.Note D>
v2n1

get or set note 1 for voice 2

>>> from music21 import *
>>> vl = VoiceLeadingQuartet('C', 'D', 'E', 'F')
>>> vl.v2n1
<music21.note.Note E>
v2n2

get or set note 2 for voice 2

>>> from music21 import *
>>> vl = VoiceLeadingQuartet('C', 'D', 'E', 'F')
>>> vl.v2n2
<music21.note.Note F>

Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, isGrace, measureNumber, offset, priority, seconds

Properties inherited from JSONSerializer: json

VoiceLeadingQuartet methods

antiParallelMotion(simpleName=None)

Returns True if the simple interval before is the same as the simple interval after and the motion is contrary. If simpleName is specified as an Interval object or a string, then it returns True only if the simpleName of both intervals matches simpleName (e.g., use it to find antiparallel fifths).

>>> from music21 import *
>>> n11 = note.Note("C4")
>>> n12 = note.Note("D3") # descending 7th
>>> n21 = note.Note("G4")
>>> n22 = note.Note("A4") # ascending 2nd
>>> vlq1 = voiceLeading.VoiceLeadingQuartet(n11, n12, n21, n22)
>>> vlq1.antiParallelMotion()
True
>>> vlq1.antiParallelMotion('M2')
False
>>> vlq1.antiParallelMotion('P5')
True

We can also use interval objects

>>> p5Obj = interval.Interval("P5")
>>> p8Obj = interval.Interval('P8')
>>> vlq1.antiParallelMotion(p5Obj)
True
>>> p8Obj = interval.Interval('P8')
>>> vlq1.antiParallelMotion(p8Obj)
False

>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('G3')
>>> vl2 = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl2.antiParallelMotion()
False
closesIncorrectly()

In the style of 16th century Counterpoint (not Bach Chorale style)

Returns True if the progression does not close correctly. A correct close requires that the closing harmonic interval be a P8 or PU, that the interval approaching the close be 6 - 8, 10 - 8, or 3 - U, that the motion be contrary, and, in a minor key, that the leading tone resolve to the tonic.

>>> from music21 import *
>>> vl = VoiceLeadingQuartet('C#', 'D', 'E', 'D')
>>> vl.key = key.Key('d')
>>> vl.closesIncorrectly()
False
>>> vl = VoiceLeadingQuartet('B3', 'C4', 'G3', 'C2')
>>> vl.key = key.Key('C')
>>> vl.closesIncorrectly()
False
>>> vl = VoiceLeadingQuartet('F', 'G', 'D', 'G')
>>> vl.key = key.Key('g')
>>> vl.closesIncorrectly()
True
>>> vl = VoiceLeadingQuartet('C#4', 'D4', 'A2', 'D3', key='D')
>>> vl.closesIncorrectly()
True
contraryMotion()

returns True if both voices move in opposite directions

>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('G4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.contraryMotion() #no motion, so contrary motion gives False
False
>>> n2.octave = 5
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.contraryMotion()
False
>>> m2.octave = 5
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.contraryMotion()
False
>>> m2 = note.Note('A5')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.contraryMotion()
False
>>> m2 = note.Note('C4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.contraryMotion()
True
hiddenFifth()

calls hiddenInterval() by passing a fifth
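
For example, reusing the voices from the hiddenInterval() example below (a sketch; the result mirrors hiddenInterval(interval.Interval('P5'))):

>>> from music21 import *
>>> vl = voiceLeading.VoiceLeadingQuartet(note.Note('C4'), note.Note('G4'), note.Note('B4'), note.Note('D5'))
>>> vl.hiddenFifth()
True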

hiddenInterval(thisInterval)

n.b. – this method finds ALL hidden intervals, not just those that are forbidden under traditional common practice counterpoint rules. Takes thisInterval, an Interval object.

>>> from music21 import *
>>> n1 = note.Note('C4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('B4')
>>> m2 = note.Note('D5')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.hiddenInterval(Interval('P5'))
True
>>> n1 = note.Note('E4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.hiddenInterval(Interval('P5'))
False
>>> m2.octave = 6
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.hiddenInterval(Interval('P5'))
False
hiddenOctave()

calls hiddenInterval by passing an octave
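
For example (a sketch; similar motion arriving at an octave):

>>> from music21 import *
>>> vl = voiceLeading.VoiceLeadingQuartet(note.Note('A3'), note.Note('C4'), note.Note('E4'), note.Note('C5'))
>>> vl.hiddenOctave()
True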

improperResolution()

Checks whether the voice-leading quartet resolves improperly according to standard counterpoint rules. If the first harmony is dissonant (d5, A4, or m7), it checks that the dissonance is correctly resolved and returns True if it is not. If the first harmony is consonant, False is returned.

The key parameter should be specified to check for motion in the bass from specific scale degrees. The default key is C major.

The proper resolutions are: diminished fifth: in by contrary motion to a third, with 7 resolving up to 1 in the bass; augmented fourth: out by contrary motion to a sixth, with the chordal seventh resolving down to a third in the bass; minor seventh: in to a third, with a leap from 5 to 1 in the bass.

>>> from music21 import *
>>> n1 = note.Note('B-4')
>>> n2 = note.Note('A4')
>>> m1 = note.Note('E4')
>>> m2 = note.Note('F4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.improperResolution() #d5
True

>>> n1 = note.Note('E5')
>>> n2 = note.Note('F5')
>>> m1 = note.Note('B-4')
>>> m2 = note.Note('A4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.improperResolution() #A4
True
>>> n1 = note.Note('B-4')
>>> n2 = note.Note('A4')
>>> m1 = note.Note('C4')
>>> m2 = note.Note('F4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.improperResolution() #m7
True

>>> n1 = note.Note('C4')
>>> n2 = note.Note('D4')
>>> m1 = note.Note('F4')
>>> m2 = note.Note('G4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.improperResolution() #not dissonant, False returned
False
>>> vl = VoiceLeadingQuartet('B-4', 'A4', 'C2', 'F2')
>>> vl.key = key.Key('F')
>>> vl.improperResolution() #m7 resolved correctly, False returned
False
inwardContraryMotion()

Returns true if both voices move inward by contrary motion

>>> from music21 import *
>>> n1 = note.Note('C5')
>>> n2 = note.Note('B4')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('A4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.inwardContraryMotion()
True
>>> vl.outwardContraryMotion()
False
leapNotSetWithStep()

Returns True if there is a leap or skip in one voice and the other voice does not move by step or unison. If neither part skips, False is returned. Also returns False if the two voices skip thirds in contrary motion.

>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('C5')
>>> m1 = note.Note('B3')
>>> m2 = note.Note('A3')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.leapNotSetWithStep()
False

>>> n1 = note.Note('G4')
>>> n2 = note.Note('C5')
>>> m1 = note.Note('B3')
>>> m2 = note.Note('F3')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.leapNotSetWithStep()
True
>>> vl = VoiceLeadingQuartet('E', 'G', 'G', 'E')
>>> vl.leapNotSetWithStep()
False
motionType()

returns the type of motion (‘Oblique’, ‘Parallel’, ‘Similar’, ‘Contrary’) that exists in this voice leading quartet

>>> from music21 import *
>>> n1 = note.Note('D4')
>>> n2 = note.Note('E4')
>>> m1 = note.Note('F4')
>>> m2 = note.Note('B4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.motionType()
'Similar'

>>> n1 = note.Note('A4')
>>> n2 = note.Note('C5')
>>> m1 = note.Note('D4')
>>> m2 = note.Note('F4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.motionType()
'Parallel'
noMotion()

Returns true if no voice moves in this “voice-leading” moment

>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('D4')
>>> m2 = note.Note('D4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.noMotion()
True
>>> n2.octave = 5
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.noMotion()
False
obliqueMotion()

Returns true if one voice remains the same and another moves. i.e., noMotion must be False if obliqueMotion is True.

>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('D4')
>>> m2 = note.Note('D4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.obliqueMotion()
False
>>> n2.octave = 5
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.obliqueMotion()
True
>>> m2.octave = 5
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.obliqueMotion()
False
opensIncorrectly()

In the style of 16th century Counterpoint (not Bach Chorale style)

Returns True if the opening does not follow 16th-century norms. A correct opening requires that the first harmonic interval (or the second, to accommodate an anacrusis) be a PU, P8, or P5, and that the opening establish tonic or dominant harmony (checked with identifyAsTonicOrDominant()).

>>> from music21 import *
>>> vl = VoiceLeadingQuartet('D','D','D','F#')
>>> vl.key = 'D'
>>> vl.opensIncorrectly()
False
>>> vl = VoiceLeadingQuartet('B','A','G#','A')
>>> vl.key = 'A'
>>> vl.opensIncorrectly()
False
>>> vl = VoiceLeadingQuartet('A', 'A', 'F#', 'D')
>>> vl.key = 'A'
>>> vl.opensIncorrectly()
False

>>> vl = VoiceLeadingQuartet('C#', 'C#', 'D', 'E')
>>> vl.key = 'A'
>>> vl.opensIncorrectly()
True
>>> vl = VoiceLeadingQuartet('B', 'B', 'A', 'A')
>>> vl.key = 'C'
>>> vl.opensIncorrectly()
True
outwardContraryMotion()

Returns true if both voices move outward by contrary motion

>>> from music21 import *
>>> n1 = note.Note('D5')
>>> n2 = note.Note('E5')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('F4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.outwardContraryMotion()
True
>>> vl.inwardContraryMotion()
False
parallelFifth()

Returns True if the motion is a parallel (or antiparallel) perfect fifth, or an octave duplication of one (e.g., a perfect twelfth).

>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("G4"), Note("A4")).parallelFifth()
True
>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("G5"), Note("A5")).parallelFifth()
True
>>> VoiceLeadingQuartet(Note("C4"), Note("D#4"), Note("G4"), Note("A4")).parallelFifth()
False
parallelInterval(thisInterval)

Returns true if there is a parallel motion or antiParallel motion of this type (thisInterval should be an Interval object)

>>> n11 = Note("C4")
>>> n12a = Note("D4") # ascending 2nd
>>> n12b = Note("D3") # descending 7th

>>> n21 = Note("G4")
>>> n22a = Note("A4") # ascending 2nd
>>> n22b = Note("B4") # ascending 3rd
>>> vlq1 = VoiceLeadingQuartet(n11, n12a, n21, n22a)
>>> vlq1.parallelInterval(Interval("P5"))
True
>>> vlq1.parallelInterval(Interval("P8"))
False

Antiparallel fifths also are true

>>> vlq2 = VoiceLeadingQuartet(n11, n12b, n21, n22a)
>>> vlq2.parallelInterval(Interval("P5"))
True

Non-parallel intervals are, of course, False

>>> vlq3 = VoiceLeadingQuartet(n11, n12a, n21, n22b)
>>> vlq3.parallelInterval(Interval("P5"))
False
parallelMotion(requiredInterval=None)

Returns True if both voices move with the same interval or an octave duplicate of the interval. If requiredInterval is given, then it returns True only if the parallel interval is that simple interval.

>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('G4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.parallelMotion() #no motion, so parallel motion gives False
False
>>> n2.octave = 5
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.parallelMotion()
False
>>> m2.octave = 5
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.parallelMotion()
True
>>> vl.parallelMotion('P8')
True
>>> vl.parallelMotion('M6')
False

>>> m2 = note.Note('A5')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.parallelMotion()
False
parallelOctave()

Returns true if the motion is a parallel Perfect Octave [ a concept so abhorrent we shudder to illustrate it with an example, but alas, we must ]

>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C5"), Note("D5")).parallelOctave()
True
>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C6"), Note("D6")).parallelOctave()
True
>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C4"), Note("D4")).parallelOctave()
False
parallelUnison()

Returns true if the motion is a parallel Perfect Unison (and not Perfect Octave, etc.)

>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C4"), Note("D4")).parallelUnison()
True
>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C5"), Note("D5")).parallelUnison()
False
parallelUnisonOrOctave()

Returns True if the voice-leading quartet moves by parallel octave or parallel unison.

>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C3"), Note("D3")).parallelUnisonOrOctave()
True
>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C4"), Note("D4")).parallelUnisonOrOctave()
True
similarMotion()

Returns true if the two voices both move in the same direction. Parallel Motion will also return true, as it is a special case of similar motion. If there is no motion, returns False.

>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('G4')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.similarMotion()
False
>>> n2.octave = 5
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.similarMotion()
False
>>> m2.octave = 5
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.similarMotion()
True
>>> m2 = note.Note('A5')
>>> vl = VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.similarMotion()
True

Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSite(), hasSpannerSite(), hasVariantSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), purgeOrphans(), purgeUndeclaredIds(), removeLocationBySite(), removeLocationBySiteId(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

ThreeNoteLinearSegment

Inherits from: NNoteLinearSegment, Music21Object, JSONSerializer

class music21.voiceLeading.ThreeNoteLinearSegment(noteListorn1=None, n2=None, n3=None)

An object consisting of three sequential notes

The middle tone in a ThreeNoteLinearSegment can be classified using methods enclosed in this class to identify it as types of embellishing tones. Further methods can be used on the entire stream to identify these as non-harmonic.

Accepts a sequence of strings, pitches, or notes.

>>> from music21 import *
>>> ex = voiceLeading.ThreeNoteLinearSegment('C#4','D4','E-4')
>>> ex.n1
<music21.note.Note C#>
>>> ex.n2
<music21.note.Note D>
>>> ex.n3
<music21.note.Note E->

>>> ex = voiceLeading.ThreeNoteLinearSegment(note.Note('A4'),note.Note('D4'),'F5')
>>> ex.n1
<music21.note.Note A>
>>> ex.n2
<music21.note.Note D>
>>> ex.n3
<music21.note.Note F>
>>> ex.iLeftToRight
<music21.interval.Interval m6>
>>> ex.iLeft
<music21.interval.Interval P-5>
>>> ex.iRight
<music21.interval.Interval m10>

If no octave is specified, a default octave of 4 is assumed.

>>> ex2 = voiceLeading.ThreeNoteLinearSegment('a','b','c')
>>> ex2.n1
<music21.note.Note A>
>>> ex2.n1.pitch.defaultOctave
4

ThreeNoteLinearSegment attributes

Attributes inherited from Music21Object: classSortOrder, isSpanner, isStream, isVariant

ThreeNoteLinearSegment properties

iLeft

get the interval between the left-most note and the middle note (read-only property)

>>> from music21 import *
>>> tnls = ThreeNoteLinearSegment('A','B','G')
>>> tnls.iLeft
<music21.interval.Interval M2>
iLeftToRight

get the interval between the left-most note and the right-most note (read-only property)

>>> from music21 import *
>>> tnls = ThreeNoteLinearSegment('C', 'E','G')
>>> tnls.iLeftToRight
<music21.interval.Interval P5>
iRight

get the interval between the middle note and the right-most note (read-only property)

>>> from music21 import *
>>> tnls = ThreeNoteLinearSegment('A','B','G')
>>> tnls.iRight
<music21.interval.Interval M-3>
n1

get or set the first note (left-most) in the segment

n2

get or set the middle note in the segment

n3

get or set the last note (right-most) in the segment
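
For example (a minimal sketch; setting a note replaces it in the segment):

>>> from music21 import *
>>> tnls = voiceLeading.ThreeNoteLinearSegment('C4', 'D4', 'E4')
>>> tnls.n3
<music21.note.Note E>
>>> tnls.n3 = note.Note('F4')
>>> tnls.n3
<music21.note.Note F>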

Properties inherited from NNoteLinearSegment: melodicIntervals, noteList

Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, isGrace, measureNumber, offset, priority, seconds

Properties inherited from JSONSerializer: json

ThreeNoteLinearSegment methods

couldBePassingTone()

Checks if the two intervals are steps and if these steps move in the same direction. Returns True if the tone is identified as either a chromatic passing tone or a diatonic passing tone. Only major and minor diatonic passing tones are recognized (not pentatonic scales or scales beyond twelve notes). Does NOT check whether the tone is non-harmonic.

Accepts pitch or note objects; method is dependent on octave information

>>> from music21 import *
>>> voiceLeading.ThreeNoteLinearSegment('C#4','D4','E-4').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C3','D3','E3').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('E-3','F3','G-3').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C3','C3','C3').couldBePassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('A3','C3','D3').couldBePassingTone()
False

Directionality must be maintained

>>> voiceLeading.ThreeNoteLinearSegment('B##3','C4','D--4').couldBePassingTone()
False

If no octave is given then ._defaultOctave is used. This is generally octave 4

>>> voiceLeading.ThreeNoteLinearSegment('C','D','E').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C4','D','E').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C5','D','E').couldBePassingTone()
False

Method returns true if either a chromatic passing tone or a diatonic passing tone is identified. Spelling of the pitch does matter!

>>> voiceLeading.ThreeNoteLinearSegment('B3','C4','B##3').couldBePassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('A##3','C4','E---4').couldBePassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('B3','C4','D-4').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('B3','C4','C#4').couldBePassingTone()
True
couldBeDiatonicPassingTone()

A note could be a diatonic passing tone (and therefore a passing tone in general) if the generic interval between the previous and the current is 2 or -2; same for the next; and both move in the same direction (that is, the two intervals multiplied by each other are 4, not -4).

>>> from music21 import *
>>> voiceLeading.ThreeNoteLinearSegment('B3','C4','C#4').couldBeDiatonicPassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('C3','D3','E3').couldBeDiatonicPassingTone()
True
couldBeChromaticPassingTone()

A note could be a chromatic passing tone (and therefore a passing tone in general) if the generic interval between the previous and the current note is -2, 1, or 2; the generic interval between the current and next note is -2, 1, or 2; the two generic intervals multiply to -2 or 2 (if 4, it would be a diatonic passing tone; if 1, it is not a passing tone at all; e.g., C -> C# -> C## is not a chromatic passing tone); AND between each pair of notes there is a chromatic interval of 1 or -1, with the two chromatic intervals multiplying to 1 (e.g., C -> D-- -> D- is not a chromatic passing tone).

>>> from music21 import *
>>> voiceLeading.ThreeNoteLinearSegment('B3','C4','C#4').couldBeChromaticPassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('B3','B#3','C#4').couldBeChromaticPassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('B3','D-4','C#4').couldBeChromaticPassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('B3','C##4','C#4').couldBeChromaticPassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('C#4','C4','C##4').couldBeChromaticPassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('D--4','C4','D-4').couldBeChromaticPassingTone()
False
couldBeNeighborTone()

Checks if the middle note (n2) could be a neighbor tone, either a diatonic neighbor tone or a chromatic neighbor tone. Does NOT check whether the tone is non-harmonic.

>>> from music21 import *
>>> voiceLeading.ThreeNoteLinearSegment('E3','F3','E3').couldBeNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('B-4','C5','B-4').couldBeNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('B4','C5','B4').couldBeNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('G4','F#4','G4').couldBeNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('E-3','F3','E-4').couldBeNeighborTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('C3','D3','E3').couldBeNeighborTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('A3','C3','D3').couldBeNeighborTone()
False
couldBeDiatonicNeighborTone()

Returns True if and only if the middle note could be a diatonic neighbor tone, that is, the left and right notes are identical while the middle note is a diatonic step up or down.

>>> from music21 import *
>>> ThreeNoteLinearSegment('C3','D3','C3').couldBeDiatonicNeighborTone()
True
>>> ThreeNoteLinearSegment('C3','C#3','C3').couldBeDiatonicNeighborTone()
False
>>> ThreeNoteLinearSegment('C3','D-3','C3').couldBeDiatonicNeighborTone()
False
couldBeChromaticNeighborTone()

Returns True if and only if the middle note could be a chromatic neighbor tone, that is, the left and right notes are identical while the middle note is a chromatic step up or down.

>>> from music21 import *
>>> ThreeNoteLinearSegment('C3','D3','C3').couldBeChromaticNeighborTone()
False
>>> ThreeNoteLinearSegment('C3','D-3','C3').couldBeChromaticNeighborTone()
True
>>> ThreeNoteLinearSegment('C#3','D3','C#3').couldBeChromaticNeighborTone()
True
>>> ThreeNoteLinearSegment('C#3','D3','D-3').couldBeChromaticNeighborTone()
False
color(color='red', noteList=[2])

Colors the notes at the positions given in noteList (1, 2, and/or 3). The default is to color only the second note red.
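
For example (a sketch; assumes each note exposes the .color attribute that this method sets):

>>> from music21 import *
>>> tnls = voiceLeading.ThreeNoteLinearSegment('C4', 'D4', 'E4')
>>> tnls.color('blue', [1, 2, 3])
>>> tnls.n2.color
'blue'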

Methods inherited from Music21Object: addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextAttr(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSite(), hasSpannerSite(), hasVariantSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), purgeOrphans(), purgeUndeclaredIds(), removeLocationBySite(), removeLocationBySiteId(), searchActiveSiteByAttr(), setContextAttr(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

VerticalSlice

Inherits from: Music21Object, JSONSerializer

class music21.voiceLeading.VerticalSlice(contentDict)

A vertical slice object provides more accessible information about vertical moments in a score. A vertical slice is instantiated by passing in a dictionary of the form {partNumber : [music21Objects]}. To create vertical slices out of a score, call getVerticalSlices().

Vertical slices are useful for providing direct and easy access to objects in a part. A list of vertical slices, although similar to the list of chords from a chordified score, provides easier access to part-number information and to the identity of objects in the score. Furthermore, the objects in a vertical slice point directly to the objects in the score, so modifying a vertical slice taken from a score is the same as modifying the elements of that vertical slice in the score directly.

>>> from music21 import *
>>> vs1 = VerticalSlice({0:[note.Note('A4'), harmony.ChordSymbol('Cm')], 1: [note.Note('F2')]})
>>> vs1.getObjectsByClass(note.Note)
[<music21.note.Note A>, <music21.note.Note F>]
>>> vs1.getObjectsByPart(0, note.Note)
<music21.note.Note A>

VerticalSlice attributes

Attributes inherited from Music21Object: classSortOrder, isSpanner, isStream, isVariant

VerticalSlice properties

color

sets the color of each element in the vertical slice

>>> from music21 import *
>>> vs1 = voiceLeading.VerticalSlice({1:note.Note('C'), 2:harmony.ChordSymbol('C')})
>>> vs1.color = 'blue'
>>> [x.color for x in vs1.objects]
['blue', 'blue']
lyric

sets each element on the vertical slice to have the passed in lyric

>>> from music21 import *
>>> h = voiceLeading.VerticalSlice({1:note.Note('C'), 2:harmony.ChordSymbol('C')})
>>> h.lyric = 'vertical slice 1'
>>> h.getStream().flat.getElementsByClass(note.Note)[0].lyric
'vertical slice 1'
objects

return a list of all the music21 objects in the vertical slice

>>> from music21 import *
>>> vs1 = VerticalSlice({0:[ harmony.ChordSymbol('C'), note.Note('A4'),], 1: [note.Note('C')]})
>>> vs1.objects
[<music21.harmony.ChordSymbol C>, <music21.note.Note A>, <music21.note.Note C>]

Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, isGrace, measureNumber, priority, seconds

Properties inherited from JSONSerializer: json

VerticalSlice methods

changeDurationofAllObjects(newQuarterLength)

changes the duration of all objects in vertical slice

>>> from music21 import *
>>> n1 =  note.Note('C4')
>>> n1.quarterLength = 1
>>> n2 =  note.Note('G4')
>>> n2.quarterLength = 2
>>> cs = harmony.ChordSymbol('C')
>>> cs.quarterLength = 4
>>> vs1 = VerticalSlice({0:n1, 1:n2, 2:cs})
>>> vs1.changeDurationofAllObjects(1.5)
>>> [x.quarterLength for x in vs1.objects]
[1.5, 1.5, 1.5]
getChord()

Extracts all simultaneously sounding pitches (from chords, notes, harmony objects, etc.) and returns them as a chord; essentially a chordified rendering of the vertical slice.

>>> from music21 import *
>>> vs1 = VerticalSlice({0:note.Note('A4'), 1:chord.Chord(['B','C','A']), 2:note.Note('A')})
>>> vs1.getChord()
<music21.chord.Chord A4 B C A A>
>>> VerticalSlice({0:note.Note('A3'), 1:chord.Chord(['F3','D4','A4']), 2:harmony.ChordSymbol('Am')}).getChord()
<music21.chord.Chord A3 F3 D4 A4 A2 C3 E3>
getLongestDuration()

returns the longest duration that exists among all elements

>>> from music21 import *
>>> n1 =  note.Note('C4')
>>> n1.quarterLength = 1
>>> n2 =  note.Note('G4')
>>> n2.quarterLength = 2
>>> cs = harmony.ChordSymbol('C')
>>> cs.quarterLength = 4
>>> vs1 = VerticalSlice({0:n1, 1:n2, 2:cs})
>>> vs1.getLongestDuration()
4.0
getObjectsByClass(classFilterList, partNums=None)

Returns a list of all objects in the vertical slice whose type is contained in the classFilterList. Optionally specify part numbers to search only those parts for matching objects.

>>> from music21 import *
>>> vs1 = VerticalSlice({0:[note.Note('A4'), harmony.ChordSymbol('C')], 1: [note.Note('C')], 2: [note.Note('B'), note.Note('F#')]})
>>> vs1.getObjectsByClass('Note')
[<music21.note.Note A>, <music21.note.Note C>, <music21.note.Note B>, <music21.note.Note F#>]
>>> vs1.getObjectsByClass('Note', [1,2])
[<music21.note.Note C>, <music21.note.Note B>, <music21.note.Note F#>]
getObjectsByPart(partNum, classFilterList=None)

Returns the list of music21 objects associated with a given part number (if there is more than one); returns the single object if there is only one. Optionally specify which types of objects to return with classFilterList.

>>> from music21 import *
>>> vs1 = VerticalSlice({0:[note.Note('A4'), harmony.ChordSymbol('C')], 1: [note.Note('C')]})
>>> vs1.getObjectsByPart(0, classFilterList=['Harmony'])
<music21.harmony.ChordSymbol C>
>>> vs1.getObjectsByPart(0)
[<music21.note.Note A>, <music21.harmony.ChordSymbol C>]
>>> vs1.getObjectsByPart(1)
<music21.note.Note C>
getShortestDuration()

returns the smallest quarterLength that exists among all elements

>>> from music21 import *
>>> n1 =  note.Note('C4')
>>> n1.quarterLength = 1
>>> n2 =  note.Note('G4')
>>> n2.quarterLength = 2
>>> cs = harmony.ChordSymbol('C')
>>> cs.quarterLength = 4
>>> vs1 = VerticalSlice({0:n1, 1:n2, 2:cs})
>>> vs1.getShortestDuration()
1.0
getStream(streamVSCameFrom=None)

returns the stream representation of this vertical slice. Optionally pass in the full stream that this verticalSlice was extracted from, and correct key, meter, and time signatures will be included (under development)

>>> from music21 import *
>>> vs1 = VerticalSlice({0:[ harmony.ChordSymbol('C'), note.Note('A4'),], 1: [note.Note('C')]})
>>> len(vs1.getStream().flat.getElementsByClass(note.Note))
2
>>> len(vs1.getStream().flat.getElementsByClass('Harmony'))
1
isConsonant()

Evaluates whether this vertical slice moment is consonant or dissonant according to common-practice consonance rules. The method builds a chord of all simultaneously sounding pitches, then calls isConsonant() on that chord.

>>> from music21 import *
>>> VerticalSlice({0:note.Note('A4'), 1:note.Note('B4'), 2:note.Note('A4')}).isConsonant()
False
>>> VerticalSlice({0:note.Note('A4'), 1:note.Note('B4'), 2:note.Note('C#4')}).isConsonant()
False
>>> VerticalSlice({0:note.Note('C3'), 1:note.Note('G5'), 2:chord.Chord(['C3','E4','G5'])}).isConsonant()
True
>>> VerticalSlice({0:note.Note('A3'), 1:note.Note('B3'), 2:note.Note('C4')}).isConsonant()
False
>>> VerticalSlice({0:note.Note('C1'), 1:note.Note('C2'), 2:note.Note('C3'), 3:note.Note('G1'), 4:note.Note('G2'), 5:note.Note('G3')}).isConsonant()
True
>>> VerticalSlice({0:note.Note('A3'), 1:harmony.ChordSymbol('Am')}).isConsonant()
True
makeAllLargestDuration()

locates the largest duration of all elements in the vertical slice and assigns this duration to each element

>>> from music21 import *
>>> n1 =  note.Note('C4')
>>> n1.quarterLength = 1
>>> n2 =  note.Note('G4')
>>> n2.quarterLength = 2
>>> cs = harmony.ChordSymbol('C')
>>> cs.quarterLength = 4
>>> vs1 = VerticalSlice({0:n1, 1:n2, 2:cs})
>>> vs1.makeAllLargestDuration()
>>> [x.quarterLength for x in vs1.objects]
[4.0, 4.0, 4.0]
makeAllSmallestDuration()

locates the smallest duration of all elements in the vertical slice and assigns this duration to each element

>>> from music21 import *
>>> n1 =  note.Note('C4')
>>> n1.quarterLength = 1
>>> n2 =  note.Note('G4')
>>> n2.quarterLength = 2
>>> cs = harmony.ChordSymbol('C')
>>> cs.quarterLength = 4
>>> vs1 = VerticalSlice({0:n1, 1:n2, 2:cs})
>>> vs1.makeAllSmallestDuration()
>>> [x.quarterLength for x in vs1.objects]
[1.0, 1.0, 1.0]
offset(leftAlign=True)

Returns the overall offset of the vertical slice. Typically this is just the offset shared by every object in the vertical slice. However, if one object in the slice has a different duration than another, and that other object starts later while the first is still sounding, then the offsets differ. In that case, specify leftAlign=True to return the lowest offset of all the objects in the vertical slice, or leftAlign=False to return the offset of the right-most (latest-starting) object.

>>> from music21 import *
>>> s = stream.Score()
>>> n1 = note.Note('A4', quarterLength=1.0)
>>> s.append(n1)
>>> n1.offset
0.0
>>> n2 = note.Note('F2', quarterLength =0.5)
>>> s.append(n2)
>>> n2.offset
1.0
>>> vs = VerticalSlice({0:n1, 1: n2})
>>> vs.getObjectsByClass(note.Note)
[<music21.note.Note A>, <music21.note.Note F>]

>>> vs.offset(leftAlign=True)
0.0
>>> vs.offset(leftAlign=False)
1.0

Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSite(), hasSpannerSite(), hasVariantSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), purgeOrphans(), purgeUndeclaredIds(), removeLocationBySite(), removeLocationBySiteId(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

VerticalSliceNTuplet

Inherits from: Music21Object, JSONSerializer

class music21.voiceLeading.VerticalSliceNTuplet(listofVerticalSlices)

a collection of n vertical slices. These objects are useful when analyzing counterpoint motion and music-theory elements such as passing tones

NChordLinearSegment

Inherits from: NObjectLinearSegment, Music21Object, JSONSerializer

class music21.voiceLeading.NChordLinearSegment(chordList)

NChordLinearSegment attributes

Attributes inherited from Music21Object: classSortOrder, isSpanner, isStream, isVariant

NChordLinearSegment properties

chordList

returns a list of all chord symbols in this linear segment

>>> from music21 import *
>>> n = NChordLinearSegment([harmony.ChordSymbol('Am'), harmony.ChordSymbol('F7'), harmony.ChordSymbol('G9')])
>>> n.chordList
[<music21.harmony.ChordSymbol Am>, <music21.harmony.ChordSymbol F7>, <music21.harmony.ChordSymbol G9>]

Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, isGrace, measureNumber, offset, priority, seconds

Properties inherited from JSONSerializer: json

NChordLinearSegment methods

Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSite(), hasSpannerSite(), hasVariantSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), purgeOrphans(), purgeUndeclaredIds(), removeLocationBySite(), removeLocationBySiteId(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

NNoteLinearSegment

Inherits from: Music21Object, JSONSerializer

class music21.voiceLeading.NNoteLinearSegment(noteList)

a list of n notes strung together in a sequence: noteList = [note1, note2, note3, ..., noteN]. Once this object is created with a noteList, the noteList may not be changed.

>>> from music21 import *
>>> n = NNoteLinearSegment(['A', 'C', 'D'])
>>> n.noteList
[<music21.note.Note A>, <music21.note.Note C>, <music21.note.Note D>]

NNoteLinearSegment attributes

Attributes inherited from Music21Object: classSortOrder, isSpanner, isStream, isVariant

NNoteLinearSegment properties

melodicIntervals

calculates the melodic intervals and returns them as a list, with the interval at 0 being the interval between the first and second note.

>>> from music21 import *
>>> n = NNoteLinearSegment([note.Note('A'), note.Note('B'), note.Note('C'), note.Note('D')])
>>> n.melodicIntervals
[<music21.interval.Interval M2>, <music21.interval.Interval M-7>, <music21.interval.Interval M2>]
noteList

the list of notes in this linear segment
>>> from music21 import *
>>> n = NNoteLinearSegment(['A', 'B5', 'C', 'F#'])
>>> n.noteList
[<music21.note.Note A>, <music21.note.Note B>, <music21.note.Note C>, <music21.note.Note F#>]

Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, isGrace, measureNumber, offset, priority, seconds

Properties inherited from JSONSerializer: json

NNoteLinearSegment methods

Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSite(), hasSpannerSite(), hasVariantSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), purgeOrphans(), purgeUndeclaredIds(), removeLocationBySite(), removeLocationBySiteId(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

NObjectLinearSegment

Inherits from: Music21Object, JSONSerializer

class music21.voiceLeading.NObjectLinearSegment(objectList)

TwoChordLinearSegment

Inherits from: NChordLinearSegment, NObjectLinearSegment, Music21Object, JSONSerializer

class music21.voiceLeading.TwoChordLinearSegment(chordList, chord2=None)

TwoChordLinearSegment attributes

Attributes inherited from Music21Object: classSortOrder, isSpanner, isStream, isVariant

TwoChordLinearSegment properties

TwoChordLinearSegment methods

bassInterval()

returns the chromatic interval between the basses of the two chord symbols

>>> from music21 import *
>>> h = voiceLeading.TwoChordLinearSegment(harmony.ChordSymbol('C/E'), harmony.ChordSymbol('G'))
>>> h.bassInterval()
<music21.interval.ChromaticInterval 3>
rootInterval()

returns the chromatic interval between the roots of the two chord symbols

>>> from music21 import *
>>> h = voiceLeading.TwoChordLinearSegment([harmony.ChordSymbol('C'), harmony.ChordSymbol('G')])
>>> h.rootInterval()
<music21.interval.ChromaticInterval 7>

Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSite(), hasSpannerSite(), hasVariantSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), purgeOrphans(), purgeUndeclaredIds(), removeLocationBySite(), removeLocationBySiteId(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

VerticalSliceTriplet

Inherits from: VerticalSliceNTuplet, Music21Object, JSONSerializer

class music21.voiceLeading.VerticalSliceTriplet(listofVerticalSlices)

a collection of three vertical slices

VerticalSliceTriplet attributes

Attributes inherited from Music21Object: classSortOrder, isSpanner, isStream, isVariant

VerticalSliceTriplet properties

VerticalSliceTriplet methods

hasNeighborTone(partNumToIdentify, unaccentedOnly=False)

Returns True if this vertical slice triplet contains a neighbor tone. music21 currently identifies neighbor tones by analyzing both horizontal and vertical motion: it first checks whether the note could be a neighbor tone based on the notes linearly adjacent to it, then checks whether the note's vertical context is dissonant while the vertical slices to the left and right are consonant.

partNumToIdentify is the part (starting with 0) in which to identify the neighbor tone; for use on three vertical slices (a triplet).

>>> from music21 import *
>>> vs1 = VerticalSlice({0:note.Note('E-4'), 1: note.Note('C3')})
>>> vs2 = VerticalSlice({0:note.Note('E-4'), 1: note.Note('B2')})
>>> vs3 = VerticalSlice({0:note.Note('C5'), 1: note.Note('C3')})
>>> tbtm = VerticalSliceTriplet([vs1, vs2, vs3])
>>> tbtm.hasNeighborTone(1)
True
hasPassingTone(partNumToIdentify, unaccentedOnly=False)

Returns True if this vertical slice triplet contains a passing tone. music21 currently identifies passing tones by analyzing both horizontal and vertical motion: it first checks whether the note could be a passing tone based on the notes linearly adjacent to it, then checks whether the note's vertical context is dissonant while the vertical slices to the left and right are consonant.

partNumToIdentify is the part (starting with 0) in which to identify the passing tone.

>>> from music21 import *
>>> vs1 = VerticalSlice({0:note.Note('A4'), 1: note.Note('F2')})
>>> vs2 = VerticalSlice({0:note.Note('B-4'), 1: note.Note('F2')})
>>> vs3 = VerticalSlice({0:note.Note('C5'), 1: note.Note('E2')})
>>> tbtm = VerticalSliceTriplet([vs1, vs2, vs3])
>>> tbtm.hasPassingTone(0)
True
>>> tbtm.hasPassingTone(1)
False

Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSite(), hasSpannerSite(), hasVariantSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), purgeOrphans(), purgeUndeclaredIds(), removeLocationBySite(), removeLocationBySiteId(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()