>>> from music21 import *
>>> c = corpus.parse('bach')
>>> n1 = c.flat.getElementsByClass(note.Note)[0]
>>> voiceLeading.getVerticalSliceFromObject(n1, c)
<music21.voiceLeading.VerticalSlice contentDict={0: [<music21.note.Note F>], 1: [<music21.note.Note F>], 2: [<music21.note.Note C>], 3: [<music21.note.Note A>], 4: [<music21.note.Note F>]}>
VoiceLeadingQuartet
Inherits from: Music21Object, JSONSerializer
An object consisting of four notes: v1n1, v1n2, v2n1, and v2n2, where v1n1 moves to v1n2 at the same time as v2n1 moves to v2n2.
Necessary for classifying types of voice-leading motion.
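For orientation before the attribute and method listings, here is a minimal sketch that builds a quartet directly from four notes and queries a few of the tests documented below; it restates behavior shown in the parallelFifth() and motionType() examples rather than introducing anything new.

from music21 import note, voiceLeading

# voice 1 moves C4 -> D4 while voice 2 moves G4 -> A4
vlq = voiceLeading.VoiceLeadingQuartet(note.Note('C4'), note.Note('D4'),
                                       note.Note('G4'), note.Note('A4'))
print(vlq.similarMotion())   # True: both voices ascend
print(vlq.parallelFifth())   # True: a P5 moves to another P5
print(vlq.motionType())      # 'Parallel'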
VoiceLeadingQuartet attributes
- fifth¶
An interval.Interval object representing a perfect fifth ('P5'), stored on the class for use in interval comparisons; see the interval module for the full Interval documentation.
- octave¶
An interval.Interval object representing a perfect octave ('P8'), stored on the class for use in interval comparisons.
- unison¶
An interval.Interval object representing a perfect unison ('P1'), stored on the class for use in interval comparisons.
Attributes without Documentation: hIntervals, vIntervals (see the sketch after this attribute list)
Attributes inherited from Music21Object: classSortOrder, hideObjectOnPrint, id, isSpanner, isStream
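Neither hIntervals nor vIntervals is documented above. Judging from the class description and the motion methods below, vIntervals appears to hold the two vertical (harmonic) intervals v1n1-v2n1 and v1n2-v2n2, and hIntervals the two horizontal (melodic) intervals v1n1-v1n2 and v2n1-v2n2. A minimal sketch under that assumption (the note choices are arbitrary):

from music21 import note, voiceLeading

vlq = voiceLeading.VoiceLeadingQuartet(note.Note('C4'), note.Note('D4'),
                                       note.Note('G4'), note.Note('A4'))
# assumed: the harmonic intervals C4-G4 and D4-A4 (two perfect fifths)
print(vlq.vIntervals)
# assumed: the melodic intervals C4-D4 and G4-A4 (two major seconds)
print(vlq.hIntervals)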
VoiceLeadingQuartet properties
- key¶
The key in which the voice leading is analyzed; may be set to a key.Key object or a tonic string, and is used by methods such as opensIncorrectly() and closesIncorrectly().
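A short example of setting the property, mirroring the assignments that appear in the opensIncorrectly() and closesIncorrectly() examples below (both a key.Key object and a plain tonic string are used there):

from music21 import key, note, voiceLeading

vlq = voiceLeading.VoiceLeadingQuartet(note.Note('C#4'), note.Note('D4'),
                                       note.Note('E4'), note.Note('D4'))
vlq.key = key.Key('d')   # a key.Key object, as in the closesIncorrectly() examples
vlq.key = 'd'            # a tonic string, as in the opensIncorrectly() examples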
- v1n1¶
>>> from music21 import *
>>> vl = voiceLeading.VoiceLeadingQuartet('C', 'D', 'E', 'F')
>>> vl.v1n1
<music21.note.Note C>
- v1n2¶
>>> from music21 import *
>>> vl = voiceLeading.VoiceLeadingQuartet('C', 'D', 'E', 'F')
>>> vl.v1n2
<music21.note.Note D>
- v2n1¶
>>> from music21 import *
>>> vl = voiceLeading.VoiceLeadingQuartet('C', 'D', 'E', 'F')
>>> vl.v2n1
<music21.note.Note E>
- v2n2¶
>>> from music21 import *
>>> vl = voiceLeading.VoiceLeadingQuartet('C', 'D', 'E', 'F')
>>> vl.v2n2
<music21.note.Note F>
Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, measureNumber, offset, priority, seconds
Properties inherited from JSONSerializer: json
VoiceLeadingQuartet methods
- antiParallelMotion(simpleName=None)¶
Returns True if the simple interval before is the same as the simple interval after and the motion is contrary. If simpleName is specified as an Interval object or a string, it returns True only if the simpleName of both intervals matches simpleName (e.g., use this to find antiparallel fifths).
>>> from music21 import *
>>> n11 = note.Note("C4")
>>> n12 = note.Note("D3")  # descending 7th
>>> n21 = note.Note("G4")
>>> n22 = note.Note("A4")  # ascending 2nd
>>> vlq1 = voiceLeading.VoiceLeadingQuartet(n11, n12, n21, n22)
>>> vlq1.antiParallelMotion()
True
>>> vlq1.antiParallelMotion('M2')
False
>>> vlq1.antiParallelMotion('P5')
True
We can also use Interval objects:
>>> p5Obj = interval.Interval("P5")
>>> p8Obj = interval.Interval("P8")
>>> vlq1.antiParallelMotion(p5Obj)
True
>>> vlq1.antiParallelMotion(p8Obj)
False
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('G3')
>>> vl2 = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl2.antiParallelMotion()
False
- closesIncorrectly()¶
Returns True if the closing motion is incorrect. A correct close arrives at a P8 or PU, approached by contrary motion from a 6th to an 8ve, a 10th to an 8ve, or a 3rd to a unison; in a minor key the leading tone must also resolve to the tonic.
>>> from music21 import *
>>> vl = voiceLeading.VoiceLeadingQuartet('C#', 'D', 'E', 'D')
>>> vl.key = key.Key('d')
>>> vl.closesIncorrectly()
False
>>> vl = voiceLeading.VoiceLeadingQuartet('B3', 'C4', 'G3', 'C2')
>>> vl.key = key.Key('C')
>>> vl.closesIncorrectly()
False
>>> vl = voiceLeading.VoiceLeadingQuartet('F', 'G', 'D', 'G')
>>> vl.key = key.Key('g')
>>> vl.closesIncorrectly()
True
>>> vl = voiceLeading.VoiceLeadingQuartet('C#4', 'D4', 'A2', 'D3', key='D')
>>> vl.closesIncorrectly()
True
- contraryMotion()¶
returns True if both voices move in opposite directions
>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('G4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.contraryMotion()  # no motion, so not contrary motion
False
>>> n2.octave = 5
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.contraryMotion()
False
>>> m2.octave = 5
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.contraryMotion()
False
>>> m2 = note.Note('A5')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.contraryMotion()
False
>>> m2 = note.Note('C4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.contraryMotion()
True
- hiddenFifth()¶
No documentation.
- hiddenInterval(thisInterval)¶
n.b. – this method finds ALL hidden intervals, not just those that are forbidden under traditional common-practice counterpoint rules. Takes thisInterval, an Interval object.
>>> from music21 import *
>>> n1 = note.Note('C4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('B4')
>>> m2 = note.Note('D5')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.hiddenInterval(interval.Interval('P5'))
True
>>> n1 = note.Note('E4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.hiddenInterval(interval.Interval('P5'))
False
>>> m2.octave = 6
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.hiddenInterval(interval.Interval('P5'))
False
- hiddenOctave()¶
No documentation.
- improperResolution()¶
Checks whether the voice-leading quartet resolves correctly according to standard counterpoint rules, returning True if the resolution is improper. If the first harmonic interval is dissonant (d5, A4, or m7), the method checks that the dissonance resolves correctly; if the first interval is consonant, there is nothing to resolve and False is returned.
The key parameter should be specified to check bass motion against specific scale degrees. The default key is C major.
Diminished fifth: resolves in by contrary motion to a third, with scale degree 7 resolving up to 1 in the bass. Augmented fourth: resolves out by contrary motion to a sixth, with the chordal seventh resolving down to a third in the bass. Minor seventh: resolves in to a third, with a leap from 5 to 1 in the bass.
>>> from music21 import *
>>> n1 = note.Note('B-4')
>>> n2 = note.Note('A4')
>>> m1 = note.Note('E4')
>>> m2 = note.Note('F4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.improperResolution()  # d5
True
>>> n1 = note.Note('E5')
>>> n2 = note.Note('F5')
>>> m1 = note.Note('B-4')
>>> m2 = note.Note('A4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.improperResolution()  # A4
True
>>> n1 = note.Note('B-4')
>>> n2 = note.Note('A4')
>>> m1 = note.Note('C4')
>>> m2 = note.Note('F4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.improperResolution()  # m7
True
>>> n1 = note.Note('C4')
>>> n2 = note.Note('D4')
>>> m1 = note.Note('F4')
>>> m2 = note.Note('G4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.improperResolution()  # not dissonant, so False is returned
False
>>> vl = voiceLeading.VoiceLeadingQuartet('B-4', 'A4', 'C2', 'F2')
>>> vl.key = key.Key('F')
>>> vl.improperResolution()  # m7 resolved correctly, so False is returned
False
- inwardContraryMotion()¶
Returns true if both voices move inward by contrary motion
>>> from music21 import *
>>> n1 = note.Note('C5')
>>> n2 = note.Note('B4')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('A4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.inwardContraryMotion()
True
>>> vl.outwardContraryMotion()
False
- leapNotSetWithStep()¶
Returns True if one voice leaps or skips and the other voice fails to move by step or unison. If neither part skips, False is returned. Also returns False if the two voices skip thirds in contrary motion.
>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('C5')
>>> m1 = note.Note('B3')
>>> m2 = note.Note('A3')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.leapNotSetWithStep()
False
>>> n1 = note.Note('G4')
>>> n2 = note.Note('C5')
>>> m1 = note.Note('B3')
>>> m2 = note.Note('F3')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.leapNotSetWithStep()
True
>>> vl = voiceLeading.VoiceLeadingQuartet('E', 'G', 'G', 'E')
>>> vl.leapNotSetWithStep()
False
- motionType()¶
Returns a string naming the type of motion between the two voices (e.g., 'Similar', 'Parallel').
>>> from music21 import *
>>> n1 = note.Note('D4')
>>> n2 = note.Note('E4')
>>> m1 = note.Note('F4')
>>> m2 = note.Note('B4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.motionType()
'Similar'
>>> n1 = note.Note('A4')
>>> n2 = note.Note('C5')
>>> m1 = note.Note('D4')
>>> m2 = note.Note('F4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.motionType()
'Parallel'
- noMotion()¶
Returns true if no voice moves at this “voice-leading” moment
>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('D4')
>>> m2 = note.Note('D4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.noMotion()
True
>>> n2.octave = 5
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.noMotion()
False
- obliqueMotion()¶
Returns true if one voice remains the same and another moves. I.e., noMotion must be False if obliqueMotion is True.
>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('D4')
>>> m2 = note.Note('D4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.obliqueMotion()
False
>>> n2.octave = 5
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.obliqueMotion()
True
>>> m2.octave = 5
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.obliqueMotion()
False
- opensIncorrectly()¶
Returns True if the opening is incorrect. A correct opening has a first (or, to accommodate an anacrusis, second) harmonic interval of PU, P8, or P5, and establishes tonic or dominant harmony (checked with identifyAsTonicOrDominant()).
>>> from music21 import *
>>> vl = voiceLeading.VoiceLeadingQuartet('D', 'D', 'D', 'F#')
>>> vl.key = 'D'
>>> vl.opensIncorrectly()
False
>>> vl = voiceLeading.VoiceLeadingQuartet('B', 'A', 'G#', 'A')
>>> vl.key = 'A'
>>> vl.opensIncorrectly()
False
>>> vl = voiceLeading.VoiceLeadingQuartet('A', 'A', 'F#', 'D')
>>> vl.key = 'A'
>>> vl.opensIncorrectly()
False
>>> vl = voiceLeading.VoiceLeadingQuartet('C#', 'C#', 'D', 'E')
>>> vl.key = 'A'
>>> vl.opensIncorrectly()
True
>>> vl = voiceLeading.VoiceLeadingQuartet('B', 'B', 'A', 'A')
>>> vl.key = 'C'
>>> vl.opensIncorrectly()
True
- outwardContraryMotion()¶
Returns true if both voices move outward by contrary motion
>>> from music21 import *
>>> n1 = note.Note('D5')
>>> n2 = note.Note('E5')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('F4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.outwardContraryMotion()
True
>>> vl.inwardContraryMotion()
False
- parallelFifth()¶
Returns True if the motion is a parallel (or antiparallel) perfect fifth, including octave duplications of the fifth (e.g., a twelfth).
>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("G4"), Note("A4")).parallelFifth() True >>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("G5"), Note("A5")).parallelFifth() True >>> VoiceLeadingQuartet(Note("C4"), Note("D#4"), Note("G4"), Note("A4")).parallelFifth() False
- parallelInterval(thisInterval)¶
Returns true if there is a parallel motion or antiParallel motion of this type (thisInterval should be an Interval object)
>>> n11 = Note("C4") >>> n12a = Note("D4") # ascending 2nd >>> n12b = Note("D3") # descending 7th >>> n21 = Note("G4") >>> n22a = Note("A4") # ascending 2nd >>> n22b = Note("B4") # ascending 3rd >>> vlq1 = VoiceLeadingQuartet(n11, n12a, n21, n22a) >>> vlq1.parallelInterval(Interval("P5")) True >>> vlq1.parallelInterval(Interval("P8")) FalseAntiparallel fifths also are true
>>> vlq2 = VoiceLeadingQuartet(n11, n12b, n21, n22a) >>> vlq2.parallelInterval(Interval("P5")) TrueNon-parallel intervals are, of course, False >>> vlq3 = VoiceLeadingQuartet(n11, n12a, n21, n22b) >>> vlq3.parallelInterval(Interval(“P5”)) False
- parallelMotion(requiredInterval=None)¶
Returns True if both voices move by the same interval or by an octave duplication of that interval. If requiredInterval is given, returns True only if the parallel interval is that simple interval.
>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('G4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.parallelMotion()  # no motion, so not parallel motion
False
>>> n2.octave = 5
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.parallelMotion()
False
>>> m2.octave = 5
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.parallelMotion()
True
>>> vl.parallelMotion('P8')
True
>>> vl.parallelMotion('M6')
False
>>> m2 = note.Note('A5')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.parallelMotion()
False
- parallelOctave()¶
Returns true if the motion is a parallel Perfect Octave
[ a concept so abhorrent we shudder to illustrate it with an example, but alas, we must ]
>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C5"), Note("D5")).parallelOctave() True >>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C6"), Note("D6")).parallelOctave() True >>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C4"), Note("D4")).parallelOctave() False
- parallelUnison()¶
Returns true if the motion is a parallel Perfect Unison (and not Perfect Octave, etc.)
>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C4"), Note("D4")).parallelUnison() True >>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C5"), Note("D5")).parallelUnison() False
- parallelUnisonOrOctave()¶
>>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C3"), Note("D3")).parallelUnisonOrOctave() True >>> VoiceLeadingQuartet(Note("C4"), Note("D4"), Note("C4"), Note("D4")).parallelUnisonOrOctave() True
- similarMotion()¶
Returns true if the two voices both move in the same direction. Parallel Motion will also return true, as it is a special case of similar motion. If there is no motion, returns False.
>>> from music21 import *
>>> n1 = note.Note('G4')
>>> n2 = note.Note('G4')
>>> m1 = note.Note('G4')
>>> m2 = note.Note('G4')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.similarMotion()
False
>>> n2.octave = 5
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.similarMotion()
False
>>> m2.octave = 5
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.similarMotion()
True
>>> m2 = note.Note('A5')
>>> vl = voiceLeading.VoiceLeadingQuartet(n1, n2, m1, m2)
>>> vl.similarMotion()
True
Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSpannerSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), removeNonContainedLocations(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()
Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()
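A practical, hedged sketch of how the class is typically driven from a score: pair consecutive notes from two parts and test each resulting quartet. This is only a sketch; it assumes the two parts move strictly note against note (no offset alignment is attempted), and the corpus path is merely an example.

from music21 import corpus, note, voiceLeading

score = corpus.parse('bach/bwv66.6')   # example piece; any two-part texture works
top = list(score.parts[0].flat.getElementsByClass(note.Note))
bottom = list(score.parts[-1].flat.getElementsByClass(note.Note))

# assumes note-against-note motion; real scores usually need offset alignment first
for i in range(min(len(top), len(bottom)) - 1):
    vlq = voiceLeading.VoiceLeadingQuartet(top[i], top[i + 1],
                                           bottom[i], bottom[i + 1])
    if vlq.parallelFifth() or vlq.parallelOctave():
        print('parallel perfect interval near offset', top[i].offset)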
ThreeNoteLinearSegment
Inherits from: NNoteLinearSegment, Music21Object, JSONSerializer
An object consisting of three sequential notes
The middle tone in a ThreeNoteLinearSegment can be classified using methods enclosed in this class to identify it as types of embellishing tones. Further methods can be used on the entire stream to identify these as non-harmonic.
Accepts a sequence of strings, pitches, or notes.
>>> from music21 import *
>>> ex = voiceLeading.ThreeNoteLinearSegment('C#4','D4','E-4')
>>> ex.n1
<music21.note.Note C#>
>>> ex.n2
<music21.note.Note D>
>>> ex.n3
<music21.note.Note E->
>>> ex = voiceLeading.ThreeNoteLinearSegment(note.Note('A4'),note.Note('D4'),'F5')
>>> ex.n1
<music21.note.Note A>
>>> ex.n2
<music21.note.Note D>
>>> ex.n3
<music21.note.Note F>
>>> ex.iLeftToRight
<music21.interval.Interval m6>
>>> ex.iLeft
<music21.interval.Interval P-5>
>>> ex.iRight
<music21.interval.Interval m10>
If no octave is specified, a default octave of 4 is assumed:
>>> ex2 = voiceLeading.ThreeNoteLinearSegment('a','b','c')
>>> ex2.n1
<music21.note.Note A>
>>> ex2.n1.pitch.defaultOctave
4
ThreeNoteLinearSegment attributes
ThreeNoteLinearSegment properties
- iLeft¶
The interval from n1 to n2 (e.g., P-5 in the example above).
- iLeftToRight¶
The interval from n1 to n3, the outer notes of the segment.
- iRight¶
The interval from n2 to n3.
- n1¶
The first note of the segment.
- n2¶
The middle note of the segment (the note analyzed by the couldBe... methods below).
- n3¶
The last note of the segment.
Properties inherited from NNoteLinearSegment: melodicIntervals, noteList
Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, measureNumber, offset, priority, seconds
Properties inherited from JSONSerializer: json
ThreeNoteLinearSegment methods
- color(color='red', noteList=[2])¶
Colors the notes identified in noteList (1, 2, and/or 3). By default only the second note is colored red.
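A hedged usage sketch: assuming the method sets the .color attribute of the selected notes (as the signature suggests), coloring notes of a segment might look like this:

from music21 import voiceLeading

seg = voiceLeading.ThreeNoteLinearSegment('C4', 'D4', 'E4')
seg.color()                          # default: color only the second note red
seg.color('blue', noteList=[1, 3])   # presumably colors the outer notes blue instead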
- couldBeChromaticNeighborTone()¶
Returns True if and only if the middle note (n2) could be a chromatic neighbor tone.
>>> from music21 import *
>>> voiceLeading.ThreeNoteLinearSegment('C3', 'D3', 'C3').couldBeChromaticNeighborTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('C3', 'D-3', 'C3').couldBeChromaticNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C#3', 'D3', 'C#3').couldBeChromaticNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C#3', 'D3', 'D-3').couldBeChromaticNeighborTone()
False
- couldBeChromaticPassingTone()¶
A note could be a chromatic passing tone (and therefore a passing tone in general) if the generic interval between the previous note and the current one is -2, 1, or 2; the generic interval between the current note and the next is -2, 1, or 2; the two generic intervals multiply to -2 or 2 (if they multiply to 4 the motion is diatonic; if to 1 it is not a passing tone, i.e. C -> C# -> C## is not a chromatic passing tone); AND between each pair of notes there is a chromatic interval of 1 or -1, and the two multiplied together give 1 (i.e. C -> D-- -> D- is not a chromatic passing tone).
>>> from music21 import *
>>> voiceLeading.ThreeNoteLinearSegment('B3','C4','C#4').couldBeChromaticPassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('B3','B#3','C#4').couldBeChromaticPassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('B3','D-4','C#4').couldBeChromaticPassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('B3','C##4','C#4').couldBeChromaticPassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('C#4','C4','C##4').couldBeChromaticPassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('D--4','C4','D-4').couldBeChromaticPassingTone()
False
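To make the rule above concrete, the sketch below checks the generic and chromatic conditions by hand using the segment's iLeft and iRight intervals (both are ordinary interval.Interval objects, as the reprs in the class example above show, so .generic and .chromatic are available):

from music21 import voiceLeading

seg = voiceLeading.ThreeNoteLinearSegment('B3', 'C4', 'C#4')
g1 = seg.iLeft.generic.directed       # 2: B3 -> C4, up a second
g2 = seg.iRight.generic.directed      # 1: C4 -> C#4, same letter name
c1 = seg.iLeft.chromatic.semitones    # 1
c2 = seg.iRight.chromatic.semitones   # 1
print(g1 * g2, c1 * c2)               # 2 1, satisfying the conditions above
print(seg.couldBeChromaticPassingTone())   # True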
- couldBeDiatonicNeighborTone()¶
Returns True if and only if the middle note (n2) could be a diatonic neighbor tone.
>>> from music21 import *
>>> voiceLeading.ThreeNoteLinearSegment('C3', 'D3', 'C3').couldBeDiatonicNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C3', 'C#3', 'C3').couldBeDiatonicNeighborTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('C3', 'D-3', 'C3').couldBeDiatonicNeighborTone()
False
- couldBeDiatonicPassingTone()¶
A note could be a diatonic passing tone (and therefore a passing tone in general) if the generic interval between the previous and the current is 2 or -2; same for the next; and both move in the same direction (that is, the two intervals multiplied by each other are 4, not -4).
>>> from music21 import *
>>> voiceLeading.ThreeNoteLinearSegment('B3','C4','C#4').couldBeDiatonicPassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('C3','D3','E3').couldBeDiatonicPassingTone()
True
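The same kind of hand check works for the diatonic rule: both generic intervals should be seconds moving in the same direction, so their directed values multiply to 4 rather than -4. A brief sketch under the same assumptions as above:

from music21 import voiceLeading

seg = voiceLeading.ThreeNoteLinearSegment('C3', 'D3', 'E3')
g1 = seg.iLeft.generic.directed    # 2
g2 = seg.iRight.generic.directed   # 2
print(g1 * g2)                     # 4: both steps, same direction
print(seg.couldBeDiatonicPassingTone())   # True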
- couldBeNeighborTone()¶
Checks whether the middle note (n2) could be a neighbor tone, either a diatonic or a chromatic neighbor tone. Does NOT check whether the tone is nonharmonic.
>>> from music21 import *
>>> voiceLeading.ThreeNoteLinearSegment('E3','F3','E3').couldBeNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('B-4','C5','B-4').couldBeNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('B4','C5','B4').couldBeNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('G4','F#4','G4').couldBeNeighborTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('E-3','F3','E-4').couldBeNeighborTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('C3','D3','E3').couldBeNeighborTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('A3','C3','D3').couldBeNeighborTone()
False
- couldBePassingTone()¶
Checks whether the two intervals are steps and whether these steps move in the same direction. Returns True if the tone is identified as either a chromatic passing tone or a diatonic passing tone. Only major and minor diatonic passing tones are recognized (not pentatonic or scales beyond twelve notes). Does NOT check whether the tone is nonharmonic.
Accepts pitch or note objects; the method depends on octave information.
>>> from music21 import *
>>> voiceLeading.ThreeNoteLinearSegment('C#4','D4','E-4').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C3','D3','E3').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('E-3','F3','G-3').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C3','C3','C3').couldBePassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('A3','C3','D3').couldBePassingTone()
False
Directionality must be maintained:
>>> voiceLeading.ThreeNoteLinearSegment('B##3','C4','D--4').couldBePassingTone()
False
If no octave is given then ._defaultOctave is used. This is generally octave 4:
>>> voiceLeading.ThreeNoteLinearSegment('C','D','E').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C4','D','E').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('C5','D','E').couldBePassingTone()
False
Method returns true if either a chromatic passing tone or a diatonic passing tone is identified. Spelling of the pitch does matter!
>>> voiceLeading.ThreeNoteLinearSegment('B3','C4','B##3').couldBePassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('A##3','C4','E---4').couldBePassingTone()
False
>>> voiceLeading.ThreeNoteLinearSegment('B3','C4','D-4').couldBePassingTone()
True
>>> voiceLeading.ThreeNoteLinearSegment('B3','C4','C#4').couldBePassingTone()
True
Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSpannerSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), removeNonContainedLocations(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()
Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()
VerticalSlice
Inherits from: Music21Object, JSONSerializer
A VerticalSlice is instantiated by passing in a dictionary of the form {partNumber: [music21Objects]}. Typically vertical slices are created by getVerticalSlices().
>>> from music21 import *
>>> vs1 = voiceLeading.VerticalSlice({0: [note.Note('A4'), harmony.ChordSymbol('Cm')], 1: [note.Note('F2')]})
>>> vs1.getObjectsByClass(note.Note)
[<music21.note.Note A>, <music21.note.Note F>]
>>> vs1.getObjectsByPart(0, note.Note)
<music21.note.Note A>
VerticalSlice attributes
VerticalSlice properties
- lyric¶
sets each element on the vertical slice to have the passed in lyric
>>> from music21 import *
>>> h = voiceLeading.VerticalSlice({1: note.Note('C'), 2: harmony.ChordSymbol('C')})
>>> h.lyric = 'vertical slice 1'
>>> h.getStream().flat.getElementsByClass(note.Note)[0].lyric
'vertical slice 1'
- objects¶
return a list of all the music21 objects in the vertical slice
>>> from music21 import *
>>> vs1 = voiceLeading.VerticalSlice({0: [harmony.ChordSymbol('C'), note.Note('A4')], 1: [note.Note('C')]})
>>> vs1.objects
[<music21.harmony.ChordSymbol C>, <music21.note.Note A>, <music21.note.Note C>]
Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, measureNumber, priority, seconds
Properties inherited from JSONSerializer: json
VerticalSlice methods
- getObjectsByClass(classFilterList, partNums=None)¶
>>> from music21 import *
>>> vs1 = voiceLeading.VerticalSlice({0: [note.Note('A4'), harmony.ChordSymbol('C')], 1: [note.Note('C')], 2: [note.Note('B'), note.Note('F#')]})
>>> vs1.getObjectsByClass('Note')
[<music21.note.Note A>, <music21.note.Note C>, <music21.note.Note B>, <music21.note.Note F#>]
>>> vs1.getObjectsByClass('Note', [1, 2])
[<music21.note.Note C>, <music21.note.Note B>, <music21.note.Note F#>]
- getObjectsByPart(partNum, classFilterList=None)¶
Returns the list of music21 objects associated with the given part number, or the single object if there is only one. Optionally specify which types of objects to return with classFilterList.
>>> from music21 import *
>>> vs1 = voiceLeading.VerticalSlice({0: [note.Note('A4'), harmony.ChordSymbol('C')], 1: [note.Note('C')]})
>>> vs1.getObjectsByPart(0, classFilterList=['Harmony'])
<music21.harmony.ChordSymbol C>
- getStream(streamVSCameFrom=None)¶
returns the stream representation of this vertical slice. Optionally pass in the full stream that this VS was extracted from, and correct key, meter, and time signatures will be included
>>> from music21 import *
>>> vs1 = voiceLeading.VerticalSlice({0: [harmony.ChordSymbol('C'), note.Note('A4')], 1: [note.Note('C')]})
>>> len(vs1.getStream().flat.getElementsByClass(note.Note))
2
>>> len(vs1.getStream().flat.getElementsByClass('Harmony'))
1
- offset(leftAlign=True)¶
Returns the overall offset of the vertical slice. Typically every object in the slice has the same offset, and that value is returned. If, however, one object starts later than another while the first is still sounding, the offsets differ: specify leftAlign=True (the default) to return the lowest offset among the objects in the slice, or leftAlign=False to return the offset of the right-most (latest-starting) object.
>>> from music21 import *
>>> s = stream.Score()
>>> n1 = note.Note('A4', quarterLength=1.0)
>>> s.append(n1)
>>> n1.offset
0.0
>>> n2 = note.Note('F2', quarterLength=0.5)
>>> s.append(n2)
>>> n2.offset
1.0
>>> vs = voiceLeading.VerticalSlice({0: n1, 1: n2})
>>> vs.getObjectsByClass(note.Note)
[<music21.note.Note A>, <music21.note.Note F>]
>>> vs.offset(leftAlign=True)
0.0
>>> vs.offset(leftAlign=False)
1.0
Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSpannerSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), removeNonContainedLocations(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()
Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()
VerticalSliceNTuplet
Inherits from: Music21Object, JSONSerializer
A collection of n vertical slices.
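No example is given for this class here. Assuming that, like its subclass VerticalSliceTriplet shown at the end of this document, it is constructed from a list of VerticalSlice objects, creation might look like the following sketch (the constructor signature is an assumption, not confirmed by this reference):

from music21 import note, voiceLeading

vs1 = voiceLeading.VerticalSlice({0: note.Note('C4'), 1: note.Note('E3')})
vs2 = voiceLeading.VerticalSlice({0: note.Note('D4'), 1: note.Note('F3')})
# assumed constructor: a list of VerticalSlice objects, mirroring
# VerticalSliceTriplet([vs1, vs2, vs3]) shown below
ntuplet = voiceLeading.VerticalSliceNTuplet([vs1, vs2])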
NChordLinearSegment
Inherits from: NObjectLinearSegment, Music21Object, JSONSerializer
NChordLinearSegment attributes
NChordLinearSegment properties
- chordList¶
>>> from music21 import *
>>> n = voiceLeading.NChordLinearSegment([harmony.ChordSymbol('Am'), harmony.ChordSymbol('F7'), harmony.ChordSymbol('G9')])
>>> n.chordList
[<music21.harmony.ChordSymbol Am>, <music21.harmony.ChordSymbol F7>, <music21.harmony.ChordSymbol G9>]
Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, measureNumber, offset, priority, seconds
Properties inherited from JSONSerializer: json
NChordLinearSegment methods
Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSpannerSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), removeNonContainedLocations(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()
Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()
NNoteLinearSegment
Inherits from: Music21Object, JSONSerializer
A list of n notes strung together in a sequence: noteList = [note1, note2, note3, ..., noteN]. Once this object is created with a noteList, the noteList may not be changed.
>>> from music21 import *
>>> n = voiceLeading.NNoteLinearSegment(['A', 'C', 'D'])
>>> n.noteList
[<music21.note.Note A>, <music21.note.Note C>, <music21.note.Note D>]
NNoteLinearSegment attributes
NNoteLinearSegment properties
- melodicIntervals¶
calculates the melodic intervals and returns them as a list, with the interval at 0 being the interval between the first and second note.
>>> from music21 import *
>>> n = voiceLeading.NNoteLinearSegment([note.Note('A'), note.Note('B'), note.Note('C'), note.Note('D')])
>>> n.melodicIntervals
[<music21.interval.Interval M2>, <music21.interval.Interval M-7>, <music21.interval.Interval M2>]
- noteList¶
>>> from music21 import *
>>> n = voiceLeading.NNoteLinearSegment(['A', 'B5', 'C', 'F#'])
>>> n.noteList
[<music21.note.Note A>, <music21.note.Note B>, <music21.note.Note C>, <music21.note.Note F#>]
Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, measureNumber, offset, priority, seconds
Properties inherited from JSONSerializer: json
NNoteLinearSegment methods
Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSpannerSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), removeNonContainedLocations(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()
Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()
NObjectLinearSegment
Inherits from: Music21Object, JSONSerializer
TwoChordLinearSegment
Inherits from: NChordLinearSegment, NObjectLinearSegment, Music21Object, JSONSerializer
TwoChordLinearSegment attributes
TwoChordLinearSegment properties
Properties inherited from NChordLinearSegment: chordList
Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, measureNumber, offset, priority, seconds
Properties inherited from JSONSerializer: json
TwoChordLinearSegment methods
- bassInterval(type='chromatic')¶
>>> from music21 import *
>>> h = voiceLeading.TwoChordLinearSegment(harmony.ChordSymbol('C/E'), harmony.ChordSymbol('G'))
>>> h.bassInterval()
<music21.interval.ChromaticInterval 3>
- rootInterval(type='chromatic')¶
>>> from music21 import *
>>> h = voiceLeading.TwoChordLinearSegment([harmony.ChordSymbol('C'), harmony.ChordSymbol('G')])
>>> h.rootInterval()
<music21.interval.ChromaticInterval 7>
Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSpannerSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), removeNonContainedLocations(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()
Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()
VerticalSliceTriplet
Inherits from: VerticalSliceNTuplet, Music21Object, JSONSerializer
a collection of three vertical slices
VerticalSliceTriplet attributes
VerticalSliceTriplet properties
Properties inherited from Music21Object: activeSite, beat, beatDuration, beatStr, beatStrength, classes, derivationHierarchy, duration, measureNumber, offset, priority, seconds
Properties inherited from JSONSerializer: json
VerticalSliceTriplet methods
- hasNeighborTone(partNumToIdentify, unaccentedOnly=False)¶
partNumToIdentify is the part (0-indexed) in which to look for the neighbor tone; for use on three vertical slices (a triplet).
>>> from music21 import *
>>> vs1 = voiceLeading.VerticalSlice({0: note.Note('E-4'), 1: note.Note('C3')})
>>> vs2 = voiceLeading.VerticalSlice({0: note.Note('E-4'), 1: note.Note('B2')})
>>> vs3 = voiceLeading.VerticalSlice({0: note.Note('C5'), 1: note.Note('C3')})
>>> tbtm = voiceLeading.VerticalSliceTriplet([vs1, vs2, vs3])
>>> tbtm.hasNeighborTone(1)
True
- hasPassingTone(partNumToIdentify, unaccentedOnly=False)¶
partNumToIdentify is the part (0-indexed) in which to look for the passing tone; for use on three vertical slices (a triplet).
>>> from music21 import *
>>> vs1 = voiceLeading.VerticalSlice({0: note.Note('A4'), 1: note.Note('F2')})
>>> vs2 = voiceLeading.VerticalSlice({0: note.Note('B-4'), 1: note.Note('F2')})
>>> vs3 = voiceLeading.VerticalSlice({0: note.Note('C5'), 1: note.Note('E2')})
>>> tbtm = voiceLeading.VerticalSliceTriplet([vs1, vs2, vs3])
>>> tbtm.hasPassingTone(0)
True
Methods inherited from Music21Object: searchActiveSiteByAttr(), getContextAttr(), setContextAttr(), addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getCommonSiteIds(), getCommonSites(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), hasSpannerSite(), isClassOrSubclass(), mergeAttributes(), next(), previous(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), removeNonContainedLocations(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()
Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()