music21.stream

The Stream and its subclasses, all of which are subclasses of Music21Object, are the fundamental containers of offset-positioned notation and musical elements in music21. Common Stream subclasses, such as the Measure and Score objects, are defined in this module.

Stream

Inherits from: Music21Object, JSONSerializer

class music21.stream.Stream(givenElements=None, *args, **keywords)

This is the fundamental container for Music21Objects; objects may be ordered and/or placed in time based on offsets from the start of this container. Like the base class, Music21Object, Streams have offsets, priority, id, and groups; they also have an elements attribute which returns a list of elements. The Stream has a duration that is usually the release time of the chronologically last element in the Stream (that is, the highest onset plus duration of any element in the Stream). However, the duration can be set explicitly, in which case we say that the duration is unlinked. Streams may be embedded within other Streams.

Stream attributes

isFlat
Boolean describing whether this Stream contains no embedded sub-Streams or Stream subclasses (that is, whether it is flat).
autoSort
Boolean describing whether the Stream is automatically sorted by offset whenever necessary.
isSorted
Boolean describing whether the Stream is sorted or not.
flattenedRepresentationOf
When this flat Stream is derived from another non-flat stream, a reference to the source Stream is stored here.

Attributes without Documentation: isMeasure

Attributes inherited from Music21Object: classSortOrder, id, groups

Stream properties

notes

The notes property of a Stream returns a new Stream object that consists only of the notes (including Note, Chord, Rest, etc.) found in the stream.

>>> from music21 import *
>>> s1 = Stream()
>>> k1 = key.KeySignature(0) # key of C
>>> n1 = note.Note('B')
>>> c1 = chord.Chord(['A', 'B-'])
>>> s1.append([k1, n1, c1])
>>> s1.show('text')
{0.0} <music21.key.KeySignature of no sharps or flats>
{0.0} <music21.note.Note B>
{1.0} <music21.chord.Chord A B->
>>> notes1 = s1.notes
>>> notes1.show('text')
{0.0} <music21.note.Note B>
{1.0} <music21.chord.Chord A B->
pitches

Return all Pitch objects found in any element in the Stream as a Python List. Elements such as Streams and Chords will have their Pitch objects accumulated as well. For that reason, a flat representation may not be required. Pitch objects are returned in a List, not a Stream. This usage differs from the notes property, but makes sense since Pitch objects are usually durationless. (That's the main difference between them and notes.)

>>> from music21 import corpus
>>> a = corpus.parseWork('bach/bwv324.xml')
>>> voiceOnePitches = a[0].pitches
>>> len(voiceOnePitches)
25
>>> voiceOnePitches[0:10]
[B4, D5, B4, B4, B4, B4, C5, B4, A4, A4]
Note that the pitches returned above are
objects, not text:
>>> voiceOnePitches[0].octave
4
Since pitches are found from internal objects,
flattening the stream is not required:
>>> len(a.pitches)
104
Pitch objects are also retrieved when stored directly on a Stream.
>>> from music21 import pitch
>>> pitch1 = pitch.Pitch()
>>> st1 = Stream()
>>> st1.append(pitch1)
>>> foundPitches = st1.pitches
>>> len(foundPitches)
1
>>> foundPitches[0] is pitch1
True
beat
No documentation.
beatDuration
No documentation.
beatStr
No documentation.
beatStrength
No documentation.
duration

Returns the total duration of the Stream, from the beginning of the stream until the end of the final element. May be set independently by supplying a Duration object.

>>> a = Stream()
>>> q = note.QuarterNote()
>>> a.repeatInsert(q, [0,1,2,3])
>>> a.highestOffset
3.0
>>> a.highestTime
4.0
>>> a.duration.quarterLength
4.0
>>> # Advanced usage: overriding the duration
>>> newDuration = duration.Duration("half")
>>> newDuration.quarterLength
2.0
>>> a.duration = newDuration
>>> a.duration.quarterLength
2.0
>>> a.highestTime # unchanged
4.0
elements
The low-level storage list of this Stream's elements. Directly getting, setting, and manipulating this list is reserved for advanced usage.
flat
Return a new Stream that has all sub-containers flattened within it.
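For instance, a minimal sketch with a Measure of four notes embedded in a Stream (the counts shown are what one would expect from the behavior described above):
>>> from music21 import *
>>> m = stream.Measure()
>>> m.repeatAppend(note.Note('C'), 4)
>>> s = stream.Stream()
>>> s.insert(0, m)
>>> len(s)
1
>>> len(s.flat.notes)
4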
highestOffset

Get the start time of the element with the highest offset in the Stream. Note the difference between this property and highestTime, which gets the end time (release) of the element with the highest offset.

>>> stream1 = Stream()
>>> for offset in [0, 4, 8]:
...     n = note.WholeNote('G#')
...     stream1.insert(offset, n)
>>> stream1.highestOffset
8.0
>>> stream1.highestTime
12.0
highestTime

Returns the maximum of all Element offsets plus their Duration in quarter lengths. This value usually represents the last "release" in the Stream. Stream.duration is usually equal to the highestTime expressed as a Duration object, but it can be set separately for advanced operations. Example: insert a note with a quarterLength of 3 at positions 0, 1, 2, 3, 4:

>>> n = note.Note('A-')
>>> n.quarterLength = 3
>>> p1 = Stream()
>>> p1.repeatInsert(n, [0, 1, 2, 3, 4])
>>> p1.highestTime # 4 + 3
7.0
isGapless
No documentation.
lily

Returns or sets the Lilypond output for the Stream. Note that (for now at least) setting the Lilypond output for a Stream does not change the stream itself. It's just a way of overriding what is printed when .lily is called.

>>> from music21 import *
>>> s1 = stream.Stream()
>>> s1.append(clef.BassClef())
>>> s1.append(meter.TimeSignature("3/4"))
>>> k1 = key.KeySignature(5)
>>> k1.mode = 'minor'
>>> s1.append(k1)
>>> s1.append(note.Note("B-3"))   # quarter note
>>> s1.append(note.HalfNote("C#2"))
>>> s1.lily
{ \clef "bass"  \time 3/4  \key gis \minor bes4 cis,2  }
lowestOffset

Get the start time of the Element with the lowest offset in the Stream.

>>> stream1 = Stream()
>>> for x in range(3,5):
...     n = note.Note('G#')
...     stream1.insert(x, n)
...
>>> stream1.lowestOffset
3.0
If the Stream is empty, then the lowest offset is 0.0:
>>> stream2 = Stream()
>>> stream2.lowestOffset
0.0
metadata

Get or set the Metadata object found at offset zero for this Stream.

>>> s = Stream()
>>> s.metadata = metadata.Metadata()
>>> s.metadata.composer = 'frank'
>>> s.metadata.composer
'frank'
midiFile
Get or set a music21.midi.base.MidiFile object.
midiTracks

Get or set this Stream from a list of music21.midi.base.MidiTrack objects.

>>> from music21 import *
>>> s = stream.Stream()
>>> n = note.Note('g#3')
>>> n.quarterLength = .5
>>> s.repeatAppend(n, 6)
>>> len(s.midiTracks[0].events)
28
musicxml
Return a complete MusicXML representation as a string.
mx

Create and return a musicxml Score object.

>>> n1 = note.Note()
>>> measure1 = Measure()
>>> measure1.insert(n1)
>>> str1 = Stream()
>>> str1.insert(measure1)
>>> mxScore = str1.mx
offsetMap

Returns a list where each element is a dictionary consisting of the 'offset' of each element in a stream, the 'endTime' (that is, the offset plus the duration), and the 'element' itself. Each dictionary also contains a 'voiceIndex' entry giving the voice number of the element, or None if there are no voices.

>>> from music21 import *
>>> n1 = note.QuarterNote()
>>> c1 = clef.AltoClef()
>>> n2 = note.HalfNote()
>>> s1 = stream.Stream()
>>> s1.append([n1, c1, n2])
>>> om = s1.offsetMap
>>> om[2]['offset']
1.0
>>> om[2]['endTime']
3.0
>>> om[2]['element'] is n2
True
>>> om[2]['voiceIndex']
semiFlat
Returns a flat-like Stream representation. Stream sub-classed containers, such as Measure or Part, are retained in the output Stream, but positioned at their relative offset.
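A companion sketch to the flat example above (expected values assumed): the Measure container remains an element of the semi-flat Stream but not of the flat one:
>>> from music21 import *
>>> m = stream.Measure()
>>> m.repeatAppend(note.Note('C'), 4)
>>> s = stream.Stream()
>>> s.insert(0, m)
>>> s.semiFlat.hasElement(m)
True
>>> s.flat.hasElement(m)
False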
sorted

Returns a new Stream where all the elements are sorted according to offset time, then priority, then classSortOrder (so that, for instance, a Clef at offset 0 appears before a Note at offset 0). If this Stream is not flat, then only the highest elements are sorted. To sort all, run myStream.flat.sorted. For instance, here is an unsorted Stream:

>>> from music21 import *
>>> s = stream.Stream()
>>> s.autoSort = False # if true, sorting is automatic
>>> s.insert(1, note.Note("D"))
>>> s.insert(0, note.Note("C"))
>>> s.show('text')
{1.0} <music21.note.Note D>
{0.0} <music21.note.Note C>

But a sorted version of the Stream puts the C first:

>>> s.sorted.show('text')
{0.0} <music21.note.Note C>
{1.0} <music21.note.Note D>

While the original stream remains unsorted:

>>> s.show('text')
{1.0} <music21.note.Note D>
{0.0} <music21.note.Note C>
spanners

Return all Spanner objects in a Stream or Stream subclass.

>>> from music21 import *
>>> s = stream.Stream()
>>> s.insert(0, spanner.Slur())
>>> s.insert(0, spanner.Slur())
>>> len(s.spanners)
2
voices

Return all Voices objects in a Stream or Stream subclass.

>>> from music21 import *
>>> s = stream.Stream()
>>> s.insert(0, stream.Voice())
>>> s.insert(0, stream.Voice())
>>> len(s.voices)
2

Properties inherited from Music21Object: activeSite, classes, measureNumberLocal, offset, priority

Properties inherited from JSONSerializer: json

Stream methods

append(others)

Add Music21Objects (including other Streams) to the Stream (or multiple objects if passed a list), with offset equal to the highestTime (that is, the latest "release" of an object), that is, directly after the last element ends. If the objects are not Music21Objects, they are wrapped in ElementWrappers. This method runs fast for multiple additions and will preserve isSorted if it is True.

>>> a = Stream()
>>> notes = []
>>> for x in range(0,3):
...     n = note.Note('G#')
...     n.duration.quarterLength = 3
...     notes.append(n)
>>> a.append(notes[0])
>>> a.highestOffset, a.highestTime
(0.0, 3.0)
>>> a.append(notes[1])
>>> a.highestOffset, a.highestTime
(3.0, 6.0)
>>> a.append(notes[2])
>>> a.highestOffset, a.highestTime
(6.0, 9.0)
>>> notes2 = []
>>> # since notes are not embedded in Elements here, their offset
>>> # changes when added to a stream!
>>> for x in range(0,3):
...     n = note.Note("A-")
...     n.duration.quarterLength = 3
...     n.offset = 0
...     notes2.append(n)
>>> a.append(notes2) # add em all again
>>> a.highestOffset, a.highestTime
(15.0, 18.0)
>>> a.isSequence()
True
Add a note that already has an offset set -- does nothing different!
>>> n3 = note.Note("B-")
>>> n3.offset = 1
>>> n3.duration.quarterLength = 3
>>> a.append(n3)
>>> a.highestOffset, a.highestTime
(18.0, 21.0)
>>> n3.getOffsetBySite(a)
18.0
insert(offsetOrItemOrList, itemOrNone=None, ignoreSort=False, setActiveSite=True)

Inserts an item (or items) at the given offset(s). If ignoreSort is True, then the insertion does not change whether the Stream is sorted or not (much faster if you're going to be inserting dozens of items that don't change the sort status). The setActiveSite parameter should nearly always be True; only for advanced Stream manipulation would you not change the activeSite after inserting an element. Has three forms: in the two-argument form, inserts an element at the given offset:

>>> st1 = Stream()
>>> st1.insert(32, note.Note("B-"))
>>> st1._getHighestOffset()
32.0
In the single argument form with an object, inserts the element at its stored offset:
>>> n1 = note.Note("C#")
>>> n1.offset = 30.0
>>> st1 = Stream()
>>> st1.insert(n1)
>>> st2 = Stream()
>>> st2.insert(40.0, n1)
>>> n1.getOffsetBySite(st1)
30.0
In single argument form with a list, the list should contain pairs that alternate
offsets and items; the method then, obviously, inserts the items
at the specified offsets:
>>> n1 = note.Note("G")
>>> n2 = note.Note("F#")
>>> st3 = Stream()
>>> st3.insert([1.0, n1, 2.0, n2])
>>> n1.getOffsetBySite(st3)
1.0
>>> n2.getOffsetBySite(st3)
2.0
>>> len(st3)
2
insertAndShift(offsetOrItemOrList, itemOrNone=None)

Insert an item at a specified or native offset, and shift any elements found in the Stream to start at the end of the added elements. This presently does not shift elements that have durations that extend into the lowest insert position.

>>> st1 = Stream()
>>> st1.insertAndShift(32, note.Note("B-"))
>>> st1.highestOffset
32.0
>>> st1.insertAndShift(32, note.Note("B-"))
>>> st1.highestOffset
33.0
In the single argument form with an object, inserts the element at its stored offset:
>>> n1 = note.Note("C#")
>>> n1.offset = 30.0
>>> n2 = note.Note("C#")
>>> n2.offset = 30.0
>>> st1 = Stream()
>>> st1.insertAndShift(n1)
>>> st1.insertAndShift(n2) # should shift offset of n1
>>> n1.getOffsetBySite(st1)
31.0
>>> n2.getOffsetBySite(st1)
30.0
>>> st2 = Stream()
>>> st2.insertAndShift(40.0, n1)
>>> st2.insertAndShift(40.0, n2)
>>> n1.getOffsetBySite(st2)
41.0
In single argument form with a list, the list should contain pairs that alternate
offsets and items; the method then, obviously, inserts the items
at the specified offsets:
>>> n1 = note.Note("G")
>>> n2 = note.Note("F#")
>>> st3 = Stream()
>>> st3.insertAndShift([1.0, n1, 2.0, n2])
>>> n1.getOffsetBySite(st3)
1.0
>>> n2.getOffsetBySite(st3)
2.0
>>> len(st3)
2
transpose(value, inPlace=False, classFilterList=['Note', 'Chord'])

Transpose all specified classes in the Stream by the user-provided value. If the value is an integer, the transposition is treated in half steps. If the value is a string, any Interval string specification can be provided. Returns a new Stream by default, but if the optional inPlace key is set to True then it modifies pitches in place.

>>> aInterval = interval.Interval('d5')
>>> from music21 import corpus
>>> aStream = corpus.parseWork('bach/bwv324.xml')
>>> part = aStream[0]
>>> aStream[0].pitches[:10]
[B4, D5, B4, B4, B4, B4, C5, B4, A4, A4]
>>> bStream = aStream[0].flat.transpose('d5')
>>> bStream.pitches[:10]
[F5, A-5, F5, F5, F5, F5, G-5, F5, E-5, E-5]
>>> aStream[0].pitches[:10]
[B4, D5, B4, B4, B4, B4, C5, B4, A4, A4]
>>> cStream = bStream.flat.transpose('a4')
>>> cStream.pitches[:10]
[B5, D6, B5, B5, B5, B5, C6, B5, A5, A5]
>>> cStream.flat.transpose(aInterval, inPlace=True)
>>> cStream.pitches[:10]
[F6, A-6, F6, F6, F6, F6, G-6, F6, E-6, E-6]
augmentOrDiminish(scalar, inPlace=False)

Scale this Stream by a provided numerical scalar. A scalar of .5 is half the durations and relative offset positions; a scalar of 2 is twice the durations and relative offset positions. If inPlace is True, the alteration will be made to the calling object. Otherwise, a new Stream is returned.

>>> from music21 import *
>>> s = stream.Stream()
>>> n = note.Note()
>>> s.repeatAppend(n, 10)
>>> s.highestOffset, s.highestTime
(9.0, 10.0)
>>> s1 = s.augmentOrDiminish(2)
>>> s1.highestOffset, s1.highestTime
(18.0, 20.0)
>>> s1 = s.augmentOrDiminish(.5)
>>> s1.highestOffset, s1.highestTime
(4.5, 5.0)
scaleOffsets(scalar, anchorZero='lowest', anchorZeroRecurse=None, inPlace=True)

Scale all offsets by a provided scalar. Durations are not altered. To augment or diminish a Stream, see the augmentOrDiminish() method. The anchorZero parameter determines if and/or where the zero offset is established for the set of offsets in this Stream before processing. Offsets are shifted to make either the lower or upper values the new zero; then offsets are scaled; then the shifts are removed. Accepted values are None (no offset shifting), "lowest", or "highest". The anchorZeroRecurse parameter determines the anchorZero for all embedded Streams, and Streams embedded within those Streams. If the lowest offset in an embedded Stream is non-zero, setting this value to None will allow the space between the start of that Stream and the first element to be scaled. If the lowest offset in an embedded Stream is non-zero, setting this value to 'lowest' will not alter the space between the start of that Stream and the first element to be scaled. To shift all the elements in a Stream, see the shiftElements() method.

>>> from music21 import note
>>> n = note.Note()
>>> n.quarterLength = 2
>>> s = Stream()
>>> s.repeatAppend(n, 20)
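The example above only constructs the source Stream; a possible continuation (with assumed result values) scales the offsets in place:
>>> s.highestOffset
38.0
>>> dummy = s.scaleOffsets(0.5)
>>> s.highestOffset
19.0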
scaleDurations(scalar, inPlace=True)
Scale all durations by a provided scalar. Offsets are not modified. To augment or diminish a Stream, see the augmentOrDiminish() method.
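A minimal sketch of the expected behavior (durations halved, offsets unchanged; the values shown are assumed):
>>> from music21 import *
>>> s = stream.Stream()
>>> n = note.Note()
>>> n.quarterLength = 2
>>> s.repeatAppend(n, 3)
>>> dummy = s.scaleDurations(0.5)
>>> [e.quarterLength for e in s.notes]
[1.0, 1.0, 1.0]
>>> [e.offset for e in s.notes]
[0.0, 2.0, 4.0]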
addGroupForElements(group, classFilter=None)

Add the group to the groups attribute of all elements. If classFilter is set, then only those elements whose objects belong to a certain class (or, for Streams, which are themselves of a certain class) are set.

>>> a = Stream()
>>> a.repeatAppend(note.Note('A-'), 30)
>>> a.repeatAppend(note.Rest(), 30)
>>> a.addGroupForElements('flute')
>>> a[0].groups
['flute']
>>> a.addGroupForElements('quietTime', note.Rest)
>>> a[0].groups
['flute']
>>> a[50].groups
['flute', 'quietTime']
>>> a[1].groups.append('quietTime') # set one note to it
>>> a[1].step = "B"
>>> b = a.getElementsByGroup('quietTime')
>>> len(b)
31
>>> c = b.getElementsByClass(note.Note)
>>> len(c)
1
>>> c[0].name
'B-'
allPlayingWhileSounding(el, elStream=None, requireClass=False)
Returns a new Stream of elements in this stream that sound at the same time as el, an element presumably in another Stream. The offset of this new Stream is set to el's offset, while the offsets of elements within the Stream are adjusted relative to their position with respect to the start of el. Thus, a note that is already sounding when el begins would have a negative offset. The duration of the returned Stream is forced to be the length of el; thus a note sustained after el ends may have a release time beyond that of the duration of the Stream. As above, elStream is an optional Stream in which to look up el's offset.
analyze(*args, **keywords)

Given an analysis method, return an analysis on this Stream. For details on arguments, see analyzeStream(). Available analysis methods include the following: 'ambitus' (runs the Ambitus analysis) and 'key' (runs the KrumhanslSchmuckler key analysis).

>>> from music21 import *
>>> s = corpus.parseWork('bach/bwv66.6')
>>> s.analyze('ambitus')
<music21.interval.Interval m21>
>>> s.analyze('key')
(F#, 'minor', 0.81547089257624916)
attachIntervalsBetweenStreams(cmpStream)

For each element in self, creates an interval.Interval object in the element's editorial that is the interval between it and the element in cmpStream that is sounding at the moment the element in srcStream is attacked. Remember, if comparing two streams with measures, etc., to run stream1.flat.attachIntervalsBetweenStreams(stream2.flat). Example usage:

>>> from music21 import *
>>> s1 = converter.parse('C4 d8 e f# g A2', '5/4')
>>> s2 = converter.parse('g4 e8 d c4   a2', '5/4')
>>> s1.attachIntervalsBetweenStreams(s2)
>>> for n in s1.notes:
...     if "Rest" in n.classes: continue  # safety check
...     if n.editorial.harmonicInterval is None: continue # if other voice had a rest...
...     print n.editorial.harmonicInterval.directedName
P12
M2
M-2
A-4
P-5
P8
attributeCount(classFilterList, attrName='quarterLength')

Return a dictionary of attribute usage for one or more classes provided in the classFilterList and having the attribute specified by attrName.

>>> from music21 import corpus
>>> a = corpus.parseWork('bach/bwv324.xml')
>>> a[0].flat.attributeCount(note.Note, 'quarterLength')
{1.0: 12, 2.0: 11, 4.0: 2}
bestClef(allowTreble8vb=False)

Returns the clef that is the best fit for notes and chords found in this Stream. This does not automatically get a flat representation of the Stream.

>>> a = Stream()
>>> for x in range(30):
...    n = note.Note()
...    n.midi = random.choice(range(60,72))
...    a.insert(n)
>>> b = a.bestClef()
>>> b.line
2
>>> b.sign
'G'
>>> c = Stream()
>>> for x in range(30):
...    n = note.Note()
...    n.midi = random.choice(range(35,55))
...    c.insert(n)
>>> d = c.bestClef()
>>> d.line
4
>>> d.sign
'F'
explode()
Create a multi-part extraction from a single polyphonic Part.
extendDuration(objName, inPlace=True)

Given a Stream and an object class name, go through the Stream and find each instance of the desired object. The time between adjacent objects is then assigned to the duration of each object. The duration of the last object is extended to the end of the Stream. If inPlace is True, this is done in-place; if inPlace is False, this returns a modified deep copy.

>>> import music21.dynamics
>>> stream1 = Stream()
>>> n = note.QuarterNote()
>>> n.duration.quarterLength
1.0
>>> stream1.repeatInsert(n, [0, 10, 20, 30, 40])
>>> dyn = music21.dynamics.Dynamic('ff')
>>> stream1.insert(15, dyn)
>>> sort1 = stream1.sorted
>>> sort1[-1].offset # offset of last element
40.0
>>> sort1.duration.quarterLength # total duration
41.0
>>> len(sort1)
6
>>> stream2 = sort1.flat.extendDuration(note.GeneralNote)
>>> len(stream2)
6
>>> stream2[0].duration.quarterLength
10.0
>>> stream2[1].duration.quarterLength # all note durs are 10
10.0
>>> stream2[-1].duration.quarterLength # or extend to end of stream
1.0
>>> stream2.duration.quarterLength
41.0
>>> stream2[-1].offset
40.0
externalize()
Assuming there is a container in this Stream (like a Voice), remove the container and place all contents in the Stream.
extractContext(searchElement, before=4.0, after=4.0, maxBefore=None, maxAfter=None, forceOutputClass=None)

Extracts elements around the given element within (before) quarter notes and (after) quarter notes (default 4), and returns a new Stream.

>>> from music21 import note
>>> qn = note.QuarterNote()
>>> qtrStream = Stream()
>>> qtrStream.repeatInsert(qn, [0, 1, 2, 3, 4, 5])
>>> hn = note.HalfNote()
>>> hn.name = "B-"
>>> qtrStream.append(hn)
>>> qtrStream.repeatInsert(qn, [8, 9, 10, 11])
>>> hnStream = qtrStream.extractContext(hn, 1.0, 1.0)
>>> hnStream._reprText()
'{5.0} <music21.note.Note C>\n{6.0} <music21.note.Note B->\n{8.0} <music21.note.Note C>'
findConsecutiveNotes(skipRests=False, skipChords=False, skipUnisons=False, skipOctaves=False, skipGaps=False, getOverlaps=False, noNone=False, **keywords)

Returns a list of consecutive pitched Notes in a Stream. A single “None” is placed in the list at any point there is a discontinuity (such as if there is a rest between two pitches).

How to determine consecutive pitches is a little tricky and there are many options. skipUnisons uses the midi-note value (.ps) to determine unisons, so enharmonic transitions (F# -> Gb) are also skipped if skipUnisons is True. We believe that this is the most common usage. However, because of this, you cannot be completely sure that x.findConsecutiveNotes() - x.findConsecutiveNotes(skipUnisons=True) will give you the number of P1s in the piece, because there could be d2's in there as well. See Test.testFindConsecutiveNotes() for usage details.
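A brief sketch of typical usage (the output shown assumes that, with skipRests=True, the Rest neither appears in the result nor breaks the succession):
>>> from music21 import *
>>> s = stream.Stream()
>>> s.append(note.Note('C4'))
>>> s.append(note.Rest())
>>> s.append(note.Note('D4'))
>>> found = s.findConsecutiveNotes(skipRests=True)
>>> [n.name for n in found]
['C', 'D']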

findGaps(minimumQuarterLength=0.001)
Returns either (1) a Stream containing Elements (that wrap the None object) whose offsets and durations are the length of gaps in the Stream, or (2) None if there are no gaps. N.B. there may be gaps in the flattened representation of the stream but not in the unflattened, which is why isSequence calls self.flat.isGapless.
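A minimal sketch (result values assumed): a gapless Stream returns None, while one with a silent region returns a Stream describing the gap:
>>> from music21 import *
>>> s = stream.Stream()
>>> s.repeatAppend(note.Note(), 3)
>>> s.findGaps() is None
True
>>> s.insert(10, note.Note())
>>> gaps = s.findGaps()
>>> len(gaps)
1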
getClefs(searchActiveSite=False, searchContext=True, returnDefault=True)

Collect all Clef objects in this Stream in a new Stream. Optionally search the activeSite Stream and/or contexts. If no Clef objects are defined, get a default using bestClef()

>>> from music21 import clef
>>> a = Stream()
>>> b = clef.AltoClef()
>>> a.insert(0, b)
>>> a.repeatInsert(note.Note("C#"), range(10))
>>> c = a.getClefs()
>>> len(c) == 1
True
getElementAfterElement(element, classList=None)

Given an element, get the next element. If classList is specified, check to make sure that the returned element is an instance of one of the classes in the list.

>>> st1 = Stream()
>>> n1 = note.Note()
>>> n2 = note.Note()
>>> r3 = note.Rest()
>>> st1.append([n1, n2, r3])
>>> t2 = st1.getElementAfterElement(n1)
>>> t2 is n2
True
>>> t3 = st1.getElementAfterElement(t2)
>>> t3 is r3
True
>>> t4 = st1.getElementAfterElement(t3)
>>> t4
>>> st1.getElementAfterElement("hi")
Traceback (most recent call last):
StreamException: ...
>>> t5 = st1.getElementAfterElement(n1, [note.Rest])
>>> t5 is r3
True
>>> t6 = st1.getElementAfterElement(n1, [note.Rest, note.Note])
>>> t6 is n2
True
getElementAfterOffset(offset, classList=None)
Get element after a provided offset
getElementAtOrAfter(offset, classList=None)
Given an offset, find the element at this offset, or with the offset greater than and nearest to it.
getElementAtOrBefore(offset, classList=None)

Given an offset, find the element at this offset, or with the offset less than and nearest to it. Returns one element, or None if no elements are at or precede this offset.

>>> import music21
>>> stream1 = music21.stream.Stream()
>>> x = music21.note.Note('D4')
>>> x.id = 'x'
>>> y = music21.note.Note('E4')
>>> y.id = 'y'
>>> z = music21.note.Rest()
>>> z.id = 'z'
>>> stream1.insert(20, x)
>>> stream1.insert(10, y)
>>> stream1.insert( 0, z)
>>> b = stream1.getElementAtOrBefore(21)
>>> b.offset, b.id
(20.0, 'x')
>>> b = stream1.getElementAtOrBefore(19)
>>> b.offset, b.id
(10.0, 'y')
>>> b = stream1.getElementAtOrBefore(0)
>>> b.offset, b.id
(0.0, 'z')
>>> b = stream1.getElementAtOrBefore(0.1)
>>> b.offset, b.id
(0.0, 'z')

You can give a list of acceptable classes to return, and non-matching elements will be ignored

>>> c = stream1.getElementAtOrBefore(100, [music21.clef.TrebleClef, music21.note.Rest])
>>> c.offset, c.id
(0.0, 'z')

Getting an object via getElementAtOrBefore sets the activeSite for that object to the Stream, and thus sets its offset

>>> stream2 = music21.stream.Stream()
>>> stream2.insert(100.5, x)
>>> x.offset
100.5
>>> d = stream1.getElementAtOrBefore(20)
>>> d is x
True
>>> x.activeSite is stream1
True
>>> x.offset
20.0
getElementBeforeElement(element, classList=None)
Given an element, get the element before it.
getElementBeforeOffset(offset, classList=None)
Get element before a provided offset
getElementById(id, classFilter=None)

Returns the first encountered element for a given id. Returns None if there is no match.

>>> import music21
>>> e = 'test'
>>> a = music21.stream.Stream()
>>> ew = music21.ElementWrapper(e)
>>> a.insert(0, ew)
>>> a[0].id = 'green'
>>> None == a.getElementById(3)
True
>>> a.getElementById('green').id
'green'
>>> a.getElementById('Green').id  # case does not matter
'green'

Getting an element by getElementById changes its activeSite

>>> b = music21.stream.Stream()
>>> b.append(ew)
>>> ew.activeSite is b
True
>>> ew2 = a.getElementById('green')
>>> ew2 is ew
True
>>> ew2.activeSite is a
True
>>> ew.activeSite is a
True
getElementByObjectId(objId)
Low-level tool to get an element based only on the object id. This does not yet handle ElementWrapper objects
getElementsByClass(classFilterList, returnStreamSubClass=True)

Return a list of all Elements that match one or more classes in the classFilterList. A single class can be provided to the classFilterList parameter.

>>> from music21 import *
>>> a = stream.Score()
>>> a.repeatInsert(note.Rest(), range(10))
>>> for x in range(4):
...     n = note.Note('G#')
...     n.offset = x * 3
...     a.insert(n)
>>> found = a.getElementsByClass(note.Note)
>>> len(found)
4
>>> found[0].pitch.accidental.name
'sharp'
>>> b = stream.Stream()
>>> b.repeatInsert(note.Rest(), range(15))
>>> a.insert(b)
>>> # here, it gets elements from within a stream
>>> # this probably should not do this, as it is one layer lower
>>> found = a.getElementsByClass(note.Rest)
>>> len(found)
10
>>> found = a.flat.getElementsByClass(note.Rest)
>>> len(found)
25
>>> found.__class__.__name__
'Score'
getElementsByGroup(groupFilterList)
>>> from music21 import note
>>> n1 = note.Note("C")
>>> n1.groups.append('trombone')
>>> n2 = note.Note("D")
>>> n2.groups.append('trombone')
>>> n2.groups.append('tuba')
>>> n3 = note.Note("E")
>>> n3.groups.append('tuba')
>>> s1 = Stream()
>>> s1.append(n1)
>>> s1.append(n2)
>>> s1.append(n3)
>>> tboneSubStream = s1.getElementsByGroup("trombone")
>>> for thisNote in tboneSubStream:
...     print(thisNote.name)
C
D
>>> tubaSubStream = s1.getElementsByGroup("tuba")
>>> for thisNote in tubaSubStream:
...     print(thisNote.name)
D
E
getElementsByOffset(offsetStart, offsetEnd=None, includeEndBoundary=True, mustFinishInSpan=False, mustBeginInSpan=True)

Return a Stream of all Elements that are found at a certain offset or within a certain offset range, specified as start and stop values. If mustFinishInSpan is True, then an event that begins between offsetStart and offsetEnd but which ends after offsetEnd will not be included. For instance, in a search from 1.0 to 3.0, a half note starting at offset 2.0 (and thus ending at 4.0) will be found only if mustFinishInSpan is False. The includeEndBoundary option determines if an element begun just at offsetEnd should be included. Setting includeEndBoundary to False at the same time as mustFinishInSpan is set to True is probably NOT what you ever want to do. Setting mustBeginInSpan to False is a good way of finding elements that are still sounding within the span even though they began before offsetStart.

[diagram: getElementsByOffset.png]
>>> st1 = Stream()
>>> n0 = note.Note("C")
>>> n0.duration.type = "half"
>>> n0.offset = 0
>>> st1.insert(n0)
>>> n2 = note.Note("D")
>>> n2.duration.type = "half"
>>> n2.offset = 2
>>> st1.insert(n2)
>>> out1 = st1.getElementsByOffset(2)
>>> len(out1)
1
>>> out1[0].step
'D'
>>> out2 = st1.getElementsByOffset(1, 3)
>>> len(out2)
1
>>> out2[0].step
'D'
>>> out3 = st1.getElementsByOffset(1, 3, mustFinishInSpan = True)
>>> len(out3)
0
>>> out4 = st1.getElementsByOffset(1, 2)
>>> len(out4)
1
>>> out4[0].step
'D'
>>> out5 = st1.getElementsByOffset(1, 2, includeEndBoundary = False)
>>> len(out5)
0
>>> out6 = st1.getElementsByOffset(1, 2, includeEndBoundary = False, mustBeginInSpan = False)
>>> len(out6)
1
>>> out6[0].step
'C'
>>> out7 = st1.getElementsByOffset(1, 3, mustBeginInSpan = False)
>>> len(out7)
2
>>> [el.step for el in out7]
['C', 'D']
>>> a = Stream()
>>> n = note.Note('G')
>>> n.quarterLength = .5
>>> a.repeatInsert(n, range(8))
>>> b = Stream()
>>> b.repeatInsert(a, [0, 3, 6])
>>> c = b.getElementsByOffset(2,6.9)
>>> len(c)
2
>>> c = b.flat.getElementsByOffset(2,6.9)
>>> len(c)
10
getElementsNotOfClass(classFilterList)

Return a list of all Elements that do not match the one or more classes in the classFilterList. A single class can be provided to the classFilterList parameter.

>>> a = Stream()
>>> a.repeatInsert(note.Rest(), range(10))
>>> for x in range(4):
...     n = note.Note('G#')
...     n.offset = x * 3
...     a.insert(n)
>>> found = a.getElementsNotOfClass(note.Note)
>>> len(found)
10
>>> b = Stream()
>>> b.repeatInsert(note.Rest(), range(15))
>>> a.insert(b)
>>> # here, it gets elements from within a stream
>>> # this probably should not do this, as it is one layer lower
>>> found = a.flat.getElementsNotOfClass(note.Rest)
>>> len(found)
4
>>> found = a.flat.getElementsNotOfClass(note.Note)
>>> len(found)
25
getInstrument(searchActiveSite=True, returnDefault=True)

Search this stream or activeSite streams for Instrument objects, otherwise return a default

>>> a = Stream()
>>> b = a.getInstrument() # a default will be returned
getKeySignatures(searchActiveSite=True, searchContext=True)

Collect all KeySignature objects in this Stream in a new Stream. Optionally search the activeSite stream and/or contexts. If no KeySignature objects are defined, returns an empty Stream

>>> from music21 import key
>>> a = Stream()
>>> b = key.KeySignature(3)
>>> a.insert(0, b)
>>> a.repeatInsert(note.Note("C#"), range(10))
>>> c = a.getKeySignatures()
>>> len(c) == 1
True
getOffsetByElement(obj)

Given an object, return the offset of that object in the context of this Stream. This method can be called on a flat representation to return the ultimate position of a nested structure.

>>> n1 = note.Note('A')
>>> n2 = note.Note('B')
>>> s1 = Stream()
>>> s1.insert(10, n1)
>>> s1.insert(100, n2)
>>> s2 = Stream()
>>> s2.insert(10, s1)
>>> s2.flat.getOffsetBySite(n1) # this will not work
Traceback (most recent call last):
DefinedContextsException: ...
>>> s2.flat.getOffsetByElement(n1)
20.0
>>> s2.flat.getOffsetByElement(n2)
110.0
getOverlaps(includeDurationless=True, includeEndBoundary=False)

Find any elements that overlap. Overlapping might include elements that have no duration but that are simultaneous. Whether elements with None durations are included is determined by includeDurationless. This method returns a dictionary, where keys are the start time of the first overlap and values are a list of all objects included in that overlap group. This example demonstrates end-joined overlaps: there are four quarter notes, each directly following the previous. Whether or not these count as overlaps is determined by the includeEndBoundary parameter.

>>> a = Stream()
>>> for x in range(4):
...     n = note.Note('G#')
...     n.duration = duration.Duration('quarter')
...     n.offset = x * 1
...     a.insert(n)
...
>>> d = a.getOverlaps(True, False)
>>> len(d)
0
>>> d = a.getOverlaps(True, True) # including coincident boundaries
>>> len(d)
1
>>> len(d[0])
4
>>> a = Stream()
>>> for x in [0,0,0,0,13,13,13]:
...     n = note.Note('G#')
...     n.duration = duration.Duration('half')
...     n.offset = x
...     a.insert(n)
...
>>> d = a.getOverlaps()
>>> len(d[0])
4
>>> len(d[13])
3
>>> a = Stream()
>>> for x in [0,0,0,0,3,3,3]:
...     n = note.Note('G#')
...     n.duration = duration.Duration('whole')
...     n.offset = x
...     a.insert(n)
...
>>> # default is to not include coincident boundaries
>>> d = a.getOverlaps()
>>> len(d[0])
7
getSimultaneous(includeDurationless=True)

Find and return any elements that start at the same time.

>>> stream1 = Stream()
>>> for x in range(4):
...     n = note.Note('G#')
...     n.offset = x * 0
...     stream1.insert(n)
...
>>> b = stream1.getSimultaneous()
>>> len(b[0]) == 4
True
>>> stream2 = Stream()
>>> for x in range(4):
...     n = note.Note('G#')
...     n.offset = x * 3
...     stream2.insert(n)
...
>>> d = stream2.getSimultaneous()
>>> len(d) == 0
True
getTimeSignatures(searchContext=True, returnDefault=True, sortByCreationTime=True)

Collect all TimeSignature objects in this stream. If no TimeSignature objects are defined, get a default

>>> a = Stream()
>>> b = meter.TimeSignature('3/4')
>>> a.insert(b)
>>> a.repeatInsert(note.Note("C#"), range(10))
>>> c = a.getTimeSignatures()
>>> len(c) == 1
True
groupCount()

Get a dictionary for each groupId and the count of instances.

>>> a = Stream()
>>> n = note.Note()
>>> a.repeatAppend(n, 30)
>>> a.addGroupForElements('P1')
>>> a.groupCount()
{'P1': 30}
>>> a[12].groups.append('green')
>>> a.groupCount()
{'P1': 30, 'green': 1}
groupElementsByOffset(returnDict=False)
Returns a list of lists in which each entry in the main list is a list of elements occurring at the same time. The list is ordered by offset (since we need to sort the list anyhow in order to group the elements), so there is no need to call stream.sorted before running this, but it can't hurt. It is DEFINITELY a feature that this method does not find elements within substreams that have the same absolute offset. See Score.lily for how this is useful. For the other behavior, call Stream.flat first.
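An illustrative sketch (result assumed), grouping two simultaneous notes together and a later one separately:
>>> from music21 import *
>>> s = stream.Stream()
>>> s.insert(0, note.Note('C'))
>>> s.insert(0, note.Note('E'))
>>> s.insert(1, note.Note('G'))
>>> groups = s.groupElementsByOffset()
>>> [len(g) for g in groups]
[2, 1]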
hasElement(obj)

Return True if an element, provided as an argument, is contained in this Stream.

>>> from music21 import *
>>> s = stream.Stream()
>>> n1 = note.Note('g')
>>> n2 = note.Note('g#')
>>> s.append(n1)
>>> s.hasElement(n1)
True
hasMeasures()
Return a boolean value showing if this Stream contains Measures
hasPartLikeStreams()
Return a boolean value showing if this Stream contains multiple Parts, or Part-like sub-Streams.
hasVoices()
Return a boolean value showing if this Stream contains Voices
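A minimal sketch for hasVoices() (hasMeasures() and hasPartLikeStreams() behave analogously; the values shown are what one would expect):
>>> from music21 import *
>>> s = stream.Stream()
>>> s.hasVoices()
False
>>> s.insert(0, stream.Voice())
>>> s.hasVoices()
True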
index(obj)

Return the first matched index for the specified object.

>>> from music21 import *
>>> a = stream.Stream()
>>> fSharp = note.Note("F#")
>>> a.repeatInsert(note.Note("A#"), range(10))
>>> a.append(fSharp)
>>> a.index(fSharp)
10
indexList(obj, firstMatchOnly=False)

Return a list of one or more index values where the supplied object is found in this Stream's elements list. To just return the first matched index, set firstMatchOnly to True. The obj parameter may be an object or the id of an object. If no matches are found, an empty list is returned. Matching is based exclusively on id() of objects.

>>> from music21 import *
>>> s = stream.Stream()
>>> n1 = note.Note('g')
>>> n2 = note.Note('g#')
>>> s.insert(0, n1)
>>> s.insert(5, n2)
>>> len(s)
2
>>> s.indexList(n1)
[0]
>>> s.indexList(n2)
[1]
insertAtNativeOffset(item)

Inserts an item at the offset that was defined before the item was inserted into a Stream; that is, item.getOffsetBySite(None). In fact, the entire implementation is self.insert(item.getOffsetBySite(None), item).

>>> n1 = note.Note("F-")
>>> n1.offset = 20.0
>>> stream1 = Stream()
>>> stream1.append(n1)
>>> n1.getOffsetBySite(stream1)
0.0
>>> n1.offset
0.0
>>> stream2 = Stream()
>>> stream2.insertAtNativeOffset(n1)
>>> stream2[0].offset
20.0
>>> n1.getOffsetBySite(stream2)
20.0
internalize(container=None, classFilterList=['GeneralNote', 'Rest', 'Chord'])
Gather all notes and related classes of this Stream and place inside a new container (like a Voice) in this Stream.
invertDiatonic(inversionNote=<music21.note.Note C>, inPlace=True)

Inverts a stream diatonically around the given note (by default, middle C). For pieces where the key signature does not change throughout the piece it is MUCH faster than for pieces where the key signature changes. Here in this test, we put Ciconia's Quod Jactatur (a single-voice piece that should have a canon solution: see trecento.quodJactatur) into 3 flats (instead of its original 1 flat) in measure 1, but into 5 sharps in measure 2, and then invert around F4, creating a new piece.

>>> from music21 import *
>>> qj = corpus.parseWork('ciconia/quod_jactatur').parts[0]
>>> qj.measures(1,2).show('text')
{0.0} <music21.stream.Measure 1 offset=0.0>
{0.0} <music21.clef.Treble8vbClef object at 0x...>
{0.0} <music21.instrument.Instrument P1: MusicXML Part: Grand Piano>
{0.0} <music21.key.KeySignature of 1 flat>
{0.0} <music21.meter.TimeSignature 2/4>
{0.0} <music21.layout.SystemLayout object at 0x...>
{0.0} <music21.note.Note C>
{1.5} <music21.note.Note D>
{2.0} <music21.stream.Measure 2 offset=2.0>
{0.0} <music21.note.Note E>
{0.5} <music21.note.Note D>
{1.0} <music21.note.Note C>
{1.5} <music21.note.Note D>
>>> k1 = qj.flat.getElementsByClass(key.KeySignature)[0]
>>> qj.flat.replace(k1, key.KeySignature(-3))
>>> qj.getElementsByClass(stream.Measure)[1].insert(0, key.KeySignature(5))
>>> qj2 = qj.invertDiatonic(note.Note('F4'), inPlace = False)
>>> qj2.measures(1,2).show('text')
{0.0} <music21.stream.Measure 1 offset=0.0>
{0.0} <music21.clef.Treble8vbClef object at 0x...>
{0.0} <music21.instrument.Instrument P1: MusicXML Part: Grand Piano>
{0.0} <music21.key.KeySignature of 3 flats>
{0.0} <music21.meter.TimeSignature 2/4>
{0.0} <music21.layout.SystemLayout object at 0x...>
{0.0} <music21.note.Note B->
{1.5} <music21.note.Note A->
{2.0} <music21.stream.Measure 2 offset=2.0>
{2.0} <music21.key.KeySignature of 5 sharps>
{2.0} <music21.note.Note G#>
{2.5} <music21.note.Note A#>
{3.0} <music21.note.Note B>
{3.5} <music21.note.Note A#>
isSequence(includeDurationless=True, includeEndBoundary=False)

A stream is a sequence if it has no overlaps.

>>> from music21 import *
>>> a = stream.Stream()
>>> for x in [0,0,0,0,3,3,3]:
...     n = note.Note('G#')
...     n.duration = duration.Duration('whole')
...     n.offset = x * 1
...     a.insert(n)
...
>>> a.isSequence()
False
makeAccidentals(pitchPast=None, useKeySignature=True, alteredPitches=None, searchKeySignatureByContext=False, cautionaryPitchClass=True, cautionaryAll=False, inPlace=True, overrideStatus=False, cautionaryNotImmediateRepeat=True)
A method to set and provide accidentals given various conditions and contexts. If useKeySignature is True, a KeySignature will be searched for in this Stream or this Stream's defined contexts. An alternative KeySignature can be supplied with this object and used for temporary pitch processing. alteredPitches is a list of modified pitches (Pitches with Accidentals) that can be directly supplied to Accidental processing. These are the same values obtained from a music21.key.KeySignature object using the alteredPitches property. If cautionaryPitchClass is True, comparisons to past accidentals are made regardless of register. That is, if a past sharp is found two octaves above a present natural, a natural sign is still displayed. If cautionaryAll is True, all accidentals are shown. If overrideStatus is True, this method will ignore any current displayStatus setting found on the Accidental. By default this does not happen. If displayStatus is set to None, the Accidental's displayStatus is set. If cautionaryNotImmediateRepeat is True, cautionary accidentals will be displayed for an altered pitch even if that pitch had already been displayed as altered. The updateAccidentalDisplay() method is used to determine if an accidental is necessary. This will assume that the complete Stream is the context of evaluation. For smaller context ranges, call this on Measure objects. If inPlace is True, this is done in-place; if inPlace is False, this returns a modified deep copy.
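A minimal usage sketch (the particular notes and key are arbitrary; no output is asserted here, since the display decisions depend on the context described above):
>>> from music21 import *
>>> s = stream.Stream()
>>> s.append(key.KeySignature(2))
>>> s.append(note.Note('F#4'))
>>> s.append(note.Note('F4'))
>>> dummy = s.makeAccidentals()  # adjusts accidental display status in place by default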
makeBeams(inPlace=True)

Return a new measure with beams applied to all notes. In the process of making Beams, this method also updates tuplet types. This is destructive and thus changes an attribute of Durations in Notes. If inPlace is True, this is done in-place; if inPlace is False, this returns a modified deep copy.

>>> aMeasure = Measure()
>>> aMeasure.timeSignature = meter.TimeSignature('4/4')
>>> aNote = note.Note()
>>> aNote.quarterLength = .25
>>> aMeasure.repeatAppend(aNote,16)
>>> bMeasure = aMeasure.makeBeams()
makeChords(minimumWindowSize=0.125, includePostWindow=True, removeRedundantPitches=True, gatherArticulations=True, gatherExpressions=True, inPlace=False)
Gather simultaneous Notes into Chords. The gathering of elements, starting from offset 0.0, uses the minimumWindowSize, in quarter lengths, to collect all Notes that start between 0.0 and the minimum window size (this permits overlaps within a minimum tolerance). After collection, the maximum duration of collected elements is found; this duration is then used to set the new starting offset. A possible gap then results between the end of the window and the offset specified by the maximum duration; these additional notes are gathered in a second pass if includePostWindow is True. The new start offset is shifted to the larger of either the minimum window or the maximum duration found in the collected group. The process is repeated until all offsets are covered. Each collection of Notes is formed into a Chord. The Chord is given the longest duration of all constituents, and is inserted at the start offset of the window from which it was gathered. Chords can gather both articulations and expressions from found Notes using gatherArticulations and gatherExpressions. The resulting Stream, if not in-place, can also gather additional objects by placing class names in the collect list. By default, TimeSignature and KeySignature objects are collected.
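A brief sketch (result assumed): three simultaneous quarter notes are gathered into a single Chord in the returned Stream:
>>> from music21 import *
>>> s = stream.Stream()
>>> s.insert(0, note.Note('C4'))
>>> s.insert(0, note.Note('E4'))
>>> s.insert(0, note.Note('G4'))
>>> chordified = s.makeChords()
>>> len(chordified.getElementsByClass('Chord'))
1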
makeMeasures(meterStream=None, refStreamOrTimeRange=None, inPlace=False)

Take a stream and partition all elements into measures based on one or more TimeSignatures defined within the stream. If no TimeSignatures are defined, a default is used. This always creates a new stream with Measures, though objects are not copied from the source Stream. If meterStream is provided, this is used to establish a sequence of TimeSignature objects, instead of any found in the Stream. Alternatively, a TimeSignature object can be provided. If refStreamOrTimeRange is provided, this is used to provide minimum and maximum offset values, necessary to fill empty rests and similar. If inPlace is True, this is done in-place; if inPlace is False, this returns a modified deep copy.

A simple example: a single measure of 4/4 is created by adding three quarter rests to a stream:

>>> from music21 import *
>>> sSrc = stream.Stream()
>>> sSrc.repeatAppend(note.Rest(), 3)
>>> sMeasures = sSrc.makeMeasures()
>>> len(sMeasures.getElementsByClass('Measure'))
1
>>> sMeasures[0].timeSignature
<music21.meter.TimeSignature 4/4>
>>> sSrc.insert(0.0, meter.TimeSignature('3/4'))
>>> sMeasures = sSrc.makeMeasures()
>>> sMeasures[0].timeSignature
<music21.meter.TimeSignature 3/4>

10 quarter notes are added to a stream, along with 10 more quarter notes on the upbeat. After makeMeasures is called, 3 measures of 4/4 are created:

>>> sSrc = stream.Part()
>>> n = note.Note()
>>> n.quarterLength = 1
>>> sSrc.repeatAppend(n, 10)
>>> sSrc.repeatInsert(n, [x+.5 for x in range(10)])
>>> sMeasures = sSrc.makeMeasures()
>>> len(sMeasures.getElementsByClass('Measure'))
3
>>> sMeasures.__class__.__name__
'Part'
>>> sMeasures[0].timeSignature
<music21.meter.TimeSignature 4/4>

If after running makeMeasures you run makeTies, it will also split long notes into smaller notes with ties. Lyrics and articulations are attached to the first note. Expressions (fermatas, etc.) will soon be attached to the last note but are NOT YET:

>>> p1 = stream.Part()
>>> p1.append(meter.TimeSignature('3/4'))
>>> longNote = note.Note("D#4")
>>> longNote.quarterLength = 7.5
>>> longNote.articulations = [articulations.Staccato()]
>>> longNote.lyric = "hello"
>>> p1.append(longNote)
>>> partWithMeasures = p1.makeMeasures()
>>> dummy = partWithMeasures.makeTies(inPlace = True)
>>> partWithMeasures.show('text')
{0.0} <music21.stream.Measure 1 offset=0.0>
{0.0} <music21.meter.TimeSignature 3/4>
{0.0} <music21.clef.TrebleClef object at 0x...>
{0.0} <music21.meter.TimeSignature 3/4>
{0.0} <music21.note.Note D#>
{3.0} <music21.stream.Measure 2 offset=3.0>
{0.0} <music21.note.Note D#>
{6.0} <music21.stream.Measure 3 offset=6.0>
{0.0} <music21.note.Note D#>
>>> allNotes = partWithMeasures.flat.notes
>>> [allNotes[0].articulations, allNotes[1].articulations, allNotes[2].articulations]
[[<music21.articulations.Staccato>], [], []]
>>> [allNotes[0].lyric, allNotes[1].lyric, allNotes[2].lyric]
['hello', None, None]

makeNotation(meterStream=None, refStreamOrTimeRange=None, inPlace=False)

This method calls a sequence of Stream methods on this Stream to prepare notation, including creating Measures if necessary, creating ties, beams, and accidentals. If inPlace is True, this is done in-place; if inPlace is False, this returns a modified deep copy.

>>> from music21 import stream, note
>>> s = stream.Stream()
>>> n = note.Note('g')
>>> n.quarterLength = 1.5
>>> s.repeatAppend(n, 10)
>>> sMeasures = s.makeNotation()
>>> len(sMeasures.getElementsByClass('Measure'))
4
makeRests(refStreamOrTimeRange=None, fillGaps=False, inPlace=True)

Given a Stream with an offset not equal to zero, fill with one Rest preceding this offset. If refStreamOrTimeRange is provided as a Stream, this Stream is used to get min and max offsets. If a list is provided, the list is assumed to provide minimum and maximum offsets. Rests will be added to fill all time defined within refStream. If fillGaps is True, this will create rests in any time regions that have no active elements. If inPlace is True, this is done in-place; if inPlace is False, this returns a modified deep copy.

>>> a = Stream()
>>> a.insert(20, note.Note())
>>> len(a)
1
>>> a.lowestOffset
20.0
>>> b = a.makeRests()
>>> len(b)
2
>>> b.lowestOffset
0.0
makeTies(meterStream=None, inPlace=True, displayTiedAccidentals=False)

Given a stream containing measures, examine each element in the Stream. If an element's duration extends beyond a measure's boundary, create a tied entity, placing the split Note in the next Measure. Note that this method assumes that there is appropriate space in the next Measure: it will not shift Notes, but instead allocate them evenly over barlines. Generally, makeMeasures is called prior to calling this method. If inPlace is True, this is done in-place; if inPlace is False, this returns a modified deep copy.

>>> d = Stream()
>>> n = note.Note()
>>> n.quarterLength = 12
>>> d.repeatAppend(n, 10)
>>> d.repeatInsert(n, [x+.5 for x in range(10)])
>>> x = d.makeMeasures()
>>> x = x.makeTies()
makeTupletBrackets(inPlace=True)
Given a Stream of mixed durations, designate the first and last tuplet of any group of tuplets as the start and end. This needs to look not only at Notes, but also at components within Notes, as these might contain additional tuplets.
measure(measureNumber, collect=[<class 'music21.clef.Clef'>, <class 'music21.meter.TimeSignature'>, <class 'music21.instrument.Instrument'>, <class 'music21.key.KeySignature'>])

Given a measure number, return a single Measure object if the Measure number exists, otherwise return None. This method is distinguished from measures() in that this method returns a single Measure object, not a Stream containing one or more Measure objects.

>>> from music21 import corpus
>>> a = corpus.parseWork('bach/bwv324.xml')
>>> a[0].measure(3)
<music21.stream.Measure 3 offset=0.0>
measureOffsetMap(classFilterList=None)

If this Stream contains Measures, provide a dictionary where keys are offsets and values are a list of references to one or more Measures that start at that offset. The offset values is always in the frame of the calling Stream (self). The classFilterList argument can be a list of classes used to find Measures. A default of None uses Measure.

>>> from music21 import corpus
>>> a = corpus.parseWork('bach/bwv324.xml')
>>> sorted(a[0].measureOffsetMap().keys())
[0.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0, 34.0, 38.0]
measures(numberStart, numberEnd, collect=[<class 'music21.clef.Clef'>, <class 'music21.meter.TimeSignature'>, <class 'music21.instrument.Instrument'>, <class 'music21.key.KeySignature'>], gatherSpanners=True)

Get a region of Measures based on a start and end Measure number, where the boundary numbers are both included. That is, a request for measures 4 through 10 will return 7 Measures, numbers 4 through 10. Additionally, any number of associated classes can be gathered as well. Associated classes are the last found class relevant to this Stream or Part. While all elements in the source are made available in the extracted region, new Measure objects are created and returned.

>>> from music21 import corpus
>>> a = corpus.parseWork('bach/bwv324.xml')
>>> b = a[0].measures(4,6)
>>> len(b)
3
melodicIntervals(*skipArgs, **skipKeywords)
Returns a Stream of Interval objects between Notes (and by default, Chords) that follow each other in a stream. The offset of the Interval is the offset of the beginning of the interval (if two notes are adjacent, then this offset is equal to the offset of the second note). See Stream.findConsecutiveNotes for a discussion of what consecutive notes mean, and which keywords are allowed. The interval between a Note and a Chord (or between two chords) is the interval between pitches[0]. For more complex interval calculations, run findConsecutiveNotes and then use notesToInterval. Returns None if there are not at least two elements found by findConsecutiveNotes. See Test.testMelodicIntervals() for usage details.
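A short sketch (the interval count shown is assumed), using the same tiny-notation parsing style as the attachIntervalsBetweenStreams example above:
>>> from music21 import *
>>> s = converter.parse('c4 d e f g', '5/4')
>>> mi = s.melodicIntervals()
>>> len(mi)
4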
mergeElements(other, classFilterList=[])

Given another Stream, store references of each element in the other Stream in this Stream. This does not make copies of any elements, but simply stores all of them in this Stream. Optionally, provide a list of classes to exclude with the classFilter list. This method provides functionality like a shallow copy, but manages locations properly, only copies elements, and permits filtering by class type.

>>> from music21 import *
>>> s1 = stream.Stream()
>>> s2 = stream.Stream()
>>> n1 = note.Note('f#')
>>> n2 = note.Note('g')
>>> s1.append(n1)
>>> s1.append(n2)
>>> s2.mergeElements(s1)
>>> len(s2)
2
>>> s1[0] is s2[0]
True
>>> s1[1] is s2[1]
True
pitchAttributeCount(pitchAttr='name')

Return a dictionary of pitch class usage (count) by selecting an attribute of the Pitch object.

>>> from music21 import corpus
>>> a = corpus.parseWork('bach/bwv324.xml')
>>> a.pitchAttributeCount('pitchClass')
{0: 3, 2: 25, 3: 3, 4: 14, 6: 15, 7: 13, 9: 17, 11: 14}
>>> a.pitchAttributeCount('name')
{u'A': 17, u'C': 3, u'B': 14, u'E': 14, u'D': 25, u'G': 13, u'D#': 3, u'F#': 15}
>>> a.pitchAttributeCount('nameWithOctave')
{u'E3': 4, u'G4': 2, u'F#4': 2, u'A2': 2, u'E2': 1, u'G2': 1, u'D3': 9, u'D#3': 1, u'B4': 7, u'A3': 5, u'F#3': 13, u'A4': 10, u'B2': 3, u'B3': 4, u'C3': 2, u'E4': 9, u'D4': 14, u'D5': 2, u'D#4': 2, u'C5': 1, u'G3': 10}
playingWhenAttacked(el, elStream=None)

Given an element (from another Stream) returns the single element in this Stream that is sounding while the given element starts. If there are multiple elements sounding at the moment it is attacked, the method returns the first element of the same class as this element, if any. If no element is of the same class, then the first element encountered is returned. For more complex usages, use allPlayingWhileSounding. Returns None if no elements fit the bill. The optional elStream is the stream in which el is found. If provided, el’s offset in that Stream is used. Otherwise, the current offset in el is used. It is just in case you are paranoid that el.offset might not be what you want.

>>> n1 = note.Note("G#")
>>> n2 = note.Note("D#")
>>> s1 = Stream()
>>> s1.insert(20.0, n1)
>>> s1.insert(21.0, n2)
>>> n3 = note.Note("C#")
>>> s2 = Stream()
>>> s2.insert(20.0, n3)
>>> s1.playingWhenAttacked(n3).name
'G#'
>>> n3._definedContexts.setOffsetBySite(s2, 20.5)
>>> s1.playingWhenAttacked(n3).name
'G#'
>>> n3._definedContexts.setOffsetBySite(s2, 21.0)
>>> n3.offset
21.0
>>> s1.playingWhenAttacked(n3).name
'D#'
# optionally, specify the site to get the offset from
>>> n3._definedContexts.setOffsetBySite(None, 100)
>>> n3.activeSite = None
>>> s1.playingWhenAttacked(n3)
<BLANKLINE>
>>> s1.playingWhenAttacked(n3, s2).name
'D#'
plot(*args, **keywords)

Given a method and keyword configuration arguments, create and display a plot. Note: plot() requires matplotlib to be installed. For details on arguments, see plotStream(). Available plots include the following Plot classes: PlotHistogramPitchSpace, PlotHistogramPitchClass, PlotHistogramQuarterLength, PlotScatterPitchSpaceQuarterLength, PlotScatterPitchClassQuarterLength, PlotScatterPitchClassOffset, PlotScatterPitchSpaceDynamicSymbol, PlotHorizontalBarPitchSpaceOffset, PlotHorizontalBarPitchClassOffset, PlotScatterWeightedPitchSpaceQuarterLength, PlotScatterWeightedPitchClassQuarterLength, PlotScatterWeightedPitchSpaceDynamicSymbol, Plot3DBarsPitchSpaceQuarterLength, PlotWindowedKrumhanslSchmuckler, and PlotWindowedAmbitus.

>>> from music21 import *
>>> s = corpus.parseWork('bach/bwv57.8')
>>> s.plot('pianoroll')
[figure: PlotHorizontalBarPitchSpaceOffset.png]
pop(index)

Return and remove the object found at the user-specified index value. Index values are those found in elements and are not necessarily in offset order.

>>> from music21 import *
>>> a = stream.Stream()
>>> a.repeatInsert(note.Note("C"), range(10))
>>> junk = a.pop(0)
>>> len(a)
9
quantize(quarterLengthDivisors=[4, 3], processOffsets=True, processDurations=False)

Quantize time values in this Stream by snapping offsets and/or durations to the nearest multiple of a quarter length value given as one or more divisors of 1 quarter length. The quantized value found closest to a divisor multiple will be used. The quarterLengthDivisors parameter provides a flexible way to specify quantization settings. For example, [2] will snap all events to an eighth note grid. [4, 3] will snap events to sixteenth notes and eighth note triplets, whichever is closer. [4, 6] will snap events to sixteenth notes and sixteenth note triplets.

>>> from music21 import *
>>> n = note.Note()
>>> n.quarterLength = .49
>>> s = stream.Stream()
>>> s.repeatInsert(n, [0.1, .49, .9, 1.51])
>>> s.quantize([4], processOffsets=True, processDurations=True)
>>> [e.offset for e in s]
[0.0, 0.5, 1.0, 1.5]
>>> [e.duration.quarterLength for e in s]
[0.5, 0.5, 0.5, 0.5]
remove(target, firstMatchOnly=True)

Remove an object from this Stream. Additionally, this Stream is removed from the object's sites in DefinedContexts. By default, only the first match is removed. This can be adjusted with the firstMatchOnly parameter.

>>> from music21 import *
>>> s = stream.Stream()
>>> n1 = note.Note('g')
>>> n2 = note.Note('g#')
>>> # copies of an object are not the same as the object
>>> n3 = copy.deepcopy(n2)
>>> s.insert(10, n1)
>>> s.insert(5, n2)
>>> s.remove(n1)
>>> len(s)
1
>>> s.insert(20, n3)
>>> s.remove(n3)
>>> [e for e in s] == [n2]
True
repeatAppend(item, numberOfTimes)

Given an object and a number, run append that many times on a deepcopy of the object. numberOfTimes should of course be a positive integer.

>>> a = Stream()
>>> n = note.Note()
>>> n.duration.type = "whole"
>>> a.repeatAppend(n, 10)
>>> a.duration.quarterLength
40.0
>>> a[9].offset
36.0
repeatInsert(item, offsets)

Given an object, create a deep copy of the object at each position specified by the offset list:

>>> a = Stream()
>>> n = note.Note('G-')
>>> n.quarterLength = 1
>>> a.repeatInsert(n, [0, 2, 3, 4, 4.5, 5, 6, 7, 8, 9, 10, 11, 12])
>>> len(a)
13
>>> a[10].offset
10.0
replace(target, replacement, firstMatchOnly=False, allTargetSites=True)

Given a target object, replace all references to that object with references to the supplied replacement object.

If allTargetSites is True (as it is by default), all sites that have a reference to the target will be similarly changed. This is useful for altering both a flat and nested representation.
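
As a rough sketch (not part of the original documentation), replacing one Note with another might look like this:

>>> from music21 import *
>>> s = stream.Stream()
>>> n1 = note.Note('C4')
>>> n2 = note.Note('E4')
>>> s.insert(0, n1)
>>> s.replace(n1, n2)
>>> [e for e in s] == [n2]
True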

setupPickleScaffold()

Prepare this stream and all of its contents for pickling.

>>> a = Stream()
>>> n = note.Note()
>>> n.duration.type = "whole"
>>> a.repeatAppend(n, 10)
>>> a.setupPickleScaffold()
shiftElements(offset, classFilterList=None)

Add the offset value to every offset of the contained elements. Elements that are stored on the _endElements list will not be changed.

>>> a = Stream()
>>> a.repeatInsert(note.Note("C"), range(0,10))
>>> a.shiftElements(30)
>>> a.lowestOffset
30.0
>>> a.shiftElements(-10)
>>> a.lowestOffset
20.0
simultaneousAttacks(stream2)

Returns an ordered list of offsets where elements are started (attacked) in both this Stream and stream2.

>>> st1 = Stream()
>>> st2 = Stream()
>>> n11 = note.Note()
>>> n12 = note.Note()
>>> n21 = note.Note()
>>> n22 = note.Note()
>>> st1.insert(10, n11)
>>> st2.insert(10, n21)
>>> st1.insert(20, n12)
>>> st2.insert(20.5, n22)
>>> simultaneous = st1.simultaneousAttacks(st2)
>>> simultaneous
[10.0]
sliceAtOffsets(offsetList, target=None, addTies=True, inPlace=False, displayTiedAccidentals=False)

Given a list of quarter lengths, slice and optionally tie all Durations at these points.

>>> from music21 import *
>>> s = stream.Stream()
>>> n = note.Note()
>>> n.quarterLength = 4
>>> s.append(n)
>>> post = s.sliceAtOffsets([1, 2, 3], inPlace=True)
>>> [(e.offset, e.quarterLength) for e in s]
[(0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (3.0, 1.0)]
sliceByBeat(target=None, addTies=True, inPlace=False, displayTiedAccidentals=False)
Slice all elements in the Stream that have a Duration at the offsets determined to be the beat from the local TimeSignature.
sliceByGreatestDivisor(addTies=True, inPlace=False)
Slice all Duration objects on all Notes of this Stream. Durations are sliced according to the approximate GCD found in all durations.
sliceByQuarterLengths(quarterLengthList, target=None, addTies=True, inPlace=False)
Slice all Duration objects on all Notes of this Stream. Durations are sliced according to the values provided in the quarterLengthList. If the sum of these values is less than the Duration, the values are accumulated in a loop to try to fill the Duration. If a match cannot be found, an Exception is raised. If target is None, the entire Stream is processed; otherwise, only the specified element is manipulated.
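
A hedged sketch of sliceByQuarterLengths() (not from the original documentation; it assumes that with inPlace=False the sliced copy is returned):

>>> from music21 import *
>>> s = stream.Stream()
>>> n = note.Note()
>>> n.quarterLength = 4
>>> s.append(n)
>>> post = s.sliceByQuarterLengths([1], inPlace=False)
>>> [e.quarterLength for e in post.notes]
[1.0, 1.0, 1.0, 1.0]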
sort()

Sort this Stream in place by offset, then priority, then standard class sort order (e.g., Clefs before KeySignatures before TimeSignatures). Note that Streams automatically sort themselves unless autoSort is set to False (as in the example below).

>>> from music21 import *
>>> n1 = note.Note('a')
>>> n2 = note.Note('b')
>>> s = stream.Stream()
>>> s.autoSort = False
>>> s.insert(100, n2)
>>> s.insert(0, n1) # n1 now has a lower offset but a higher index
>>> [n.name for n in s]
['B', 'A']
>>> s.sort()
>>> [n.name for n in s]
['A', 'B']
splitByClass(objName, fx)

Given a stream, get all objects of the class specified by objName and then form two new streams. fx should be a lambda or other function operating on an element. All elements for which fx returns True go in the first stream; all other elements are put in the second stream.

>>> stream1 = Stream()
>>> for x in range(30,81):
...     n = note.Note()
...     n.offset = x
...     n.midi = x
...     stream1.insert(n)
>>> fx = lambda n: n.midi > 60
>>> b, c = stream1.splitByClass(note.Note, fx)
>>> len(b)
20
>>> len(c)
31
storeAtEnd(itemOrList, ignoreSort=False)
Inserts an item or items at the end of the Stream, stored in the special _endElements list. As sorting there is only by priority and class, this cannot avoid setting isSorted to False.
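
For example, a sketch (not from the original documentation) of storing a closing Barline at the end of a Stream:

>>> from music21 import *
>>> s = stream.Stream()
>>> s.repeatAppend(note.Note(), 4)
>>> s.storeAtEnd(bar.Barline('light-heavy'))
>>> s.highestTime
4.0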
stripTies(inPlace=False, matchByPitch=False, retainContainers=False)

Find all notes that are tied; make the first of the tied notes have a duration equal to that of all its tied constituents, and then remove the remaining, formerly-tied notes. This method can be used on Stream and Stream subclasses. When used on a Score, Parts and Measures are retained. If retainContainers is False (the default), this method only returns Note objects; Measures and other structures are stripped from the Stream. Set retainContainers to True to remove ties from a Part Stream that contains Measure Streams and get back a multi-Measure structure. Presently, this only works if tied notes are sequential; ultimately this will need to look at .to and .from attributes (if they exist). In some cases (under makeMeasures()) a continuation note will not have a Tie object with a stop attribute set. In that case, we need to look for sequential notes with matching pitches; the matchByPitch option enables this technique.

>>> a = Stream()
>>> n = note.Note()
>>> n.quarterLength = 6
>>> a.append(n)
>>> m = a.makeMeasures()
>>> m = m.makeTies()
>>> len(m.flat.notes)
2
>>> m = m.stripTies()
>>> len(m.flat.notes)
1
teardownPickleScaffold()

After rebuilding this stream from pickled storage, prepare this as a normal Stream.

>>> a = Stream()
>>> n = note.Note()
>>> n.duration.type = "whole"
>>> a.repeatAppend(n, 10)
>>> a.setupPickleScaffold()
>>> a.teardownPickleScaffold()
transferOffsetToElements()

Transfer the offset of this stream to all internal elements; then set the offset of this stream to zero.

>>> a = Stream()
>>> a.repeatInsert(note.Note("C"), range(0,10))
>>> a.offset = 30
>>> a.transferOffsetToElements()
>>> a.lowestOffset
30.0
>>> a.offset
0.0
>>> a.offset = 20
>>> a.transferOffsetToElements()
>>> a.lowestOffset
50.0
trimPlayingWhileSounding(el, elStream=None, requireClass=False, padStream=False)
Returns a Stream of deep copies of elements in otherStream (this Stream) that sound at the same time as el, but with any element that was already sounding when el begins trimmed to start with el, and any element still sounding when el ends trimmed to end with el. If padStream is set to True, empty space at the beginning and end is filled with a generic Music21Object, so that the result is the same length as el no matter what otherStream contains. Otherwise this is the same as allPlayingWhileSounding() – but because these elements are deep copies, the difference might bite you if you’re not careful. Note that you can make el an empty Stream of offset X and duration Y to extract exactly that much information from otherStream.
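
A rough sketch of the calling convention (not from the original documentation; the element, offsets, and durations are illustrative only):

>>> from music21 import *
>>> sounding = stream.Stream()
>>> held = note.Note('C4')
>>> held.quarterLength = 4
>>> sounding.insert(0, held)
>>> reference = stream.Stream()
>>> target = note.Note('E4')
>>> target.quarterLength = 1
>>> reference.insert(1, target)
>>> trimmed = sounding.trimPlayingWhileSounding(target, elStream=reference)
>>> trimmed is not sounding
True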
voicesToParts()
If this Stream defines one or more voices, extract each into a Part, returning a Score. If this Stream has no voices, return the Stream as a Part within a Score.
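
A sketch (not from the original documentation) showing two Voices extracted into Parts:

>>> from music21 import *
>>> m = stream.Measure()
>>> v1 = stream.Voice()
>>> v1.append(note.Note('C4'))
>>> v2 = stream.Voice()
>>> v2.append(note.Note('E4'))
>>> m.insert(0, v1)
>>> m.insert(0, v2)
>>> s = m.voicesToParts()
>>> len(s.parts)
2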

Methods inherited from Music21Object: addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getContextAttr(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), mergeAttributes(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), searchParentByAttr(), setContextAttr(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

Measure

Inherits from: Stream, Music21Object, JSONSerializer

class music21.stream.Measure(*args, **keywords)

A representation of a Measure organized as a Stream. All properties of a Measure that are Music21 objects are found as part of the Stream’s elements.

Measure attributes

number
A number representing the displayed or shown Measure number as presented in a written Score.
timeSignatureIsNew
Boolean describing if the TimeSignature is different than the previous Measure.
layoutWidth
A suggestion for layout width, though most rendering systems do not support this designation. Use SystemLayout objects instead.
clefIsNew
Boolean describing if the Clef is different than the previous Measure.
keyIsNew
Boolean describing if KeySignature is different than the previous Measure.
numberSuffix
If a Measure number has a string annotation, such as “a” or similar, this string is stored here.

Attributes without Documentation: isMeasure, filled, paddingLeft, paddingRight

Attributes inherited from Stream: isFlat, autoSort, isSorted, flattenedRepresentationOf

Attributes inherited from Music21Object: classSortOrder, id, groups

Measure properties

barDuration
Return the bar duration, or the Duration specified by the TimeSignature. The TimeSignature is sought first within the Measure, then through a context-based search.
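
For example (a sketch, assuming a TimeSignature has been set on the Measure):

>>> from music21 import *
>>> m = stream.Measure()
>>> m.timeSignature = meter.TimeSignature('3/4')
>>> m.barDuration.quarterLength
3.0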
clef
>>> a = Measure()
>>> a.clef = clef.TrebleClef()
>>> a.clef.sign  # clef is an element
'G'
keySignature
>>> a = Measure()
>>> a.keySignature = key.KeySignature(0)
>>> a.keySignature.sharps
0
leftBarline
Get or set the left barline, or the Barline object found at offset zero of the Measure.
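
By analogy with rightBarline below, a sketch (not from the original documentation):

>>> from music21 import *
>>> m = stream.Measure()
>>> m.leftBarline = bar.Barline('heavy-light')
>>> m.leftBarline.style
'heavy-light'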
musicxml
Provide a complete MusicXML representation.
mx

Return a musicxml Measure, populated with notes, chords, and rests, and a musicxml Attributes, populated with time, meter, key, etc.

>>> a = note.Note()
>>> a.quarterLength = 4
>>> b = Measure()
>>> b.insert(0, a)
>>> len(b)
1
>>> mxMeasure = b.mx
>>> len(mxMeasure)
1
rightBarline

Get or set the right barline, or the Barline object found at the offset equal to the bar duration.

>>> from music21 import *
>>> b = bar.Barline('light-heavy')
>>> m = stream.Measure()
>>> m.rightBarline = b
>>> m.rightBarline.style
'light-heavy'
timeSignature
>>> a = Measure()
>>> a.timeSignature = meter.TimeSignature('2/4')
>>> a.timeSignature.numerator, a.timeSignature.denominator
(2, 4)

Properties inherited from Stream: beat, beatDuration, beatStr, beatStrength, duration, elements, flat, highestOffset, highestTime, isGapless, lily, lowestOffset, metadata, midiFile, midiTracks, notes, offsetMap, pitches, semiFlat, sorted, spanners, voices

Properties inherited from Music21Object: activeSite, classes, measureNumberLocal, offset, priority

Properties inherited from JSONSerializer: json

Measure methods

addRepeat()
No documentation.
addTimeDependentDirection(time, direction)
No documentation.
barDurationProportion(barDuration=None)

Return a floating point value greater than 0 showing the proportion of the bar duration that is filled, based on the highest time of all elements: 0.0 is empty, 1.0 is filled, and 1.5 indicates an overflow of half a bar. Bar duration refers to the duration of the Measure as suggested by the TimeSignature; this value cannot be determined without a TimeSignature. An already-obtained Duration object can be supplied with the optional barDuration argument.

>>> from music21 import *
>>> import copy
>>> m = stream.Measure()
>>> m.timeSignature = meter.TimeSignature('3/4')
>>> n = note.Note()
>>> n.quarterLength = 1
>>> m.append(copy.deepcopy(n))
>>> m.barDurationProportion()
0.33333...
>>> m.append(copy.deepcopy(n))
>>> m.barDurationProportion()
0.66666...
>>> m.append(copy.deepcopy(n))
>>> m.barDurationProportion()
1.0
>>> m.append(copy.deepcopy(n))
>>> m.barDurationProportion()
1.33333...
bestTimeSignature()
Given a Measure with elements in it, get a TimeSignature that contains all elements. Note: this does not yet accommodate triplets.
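
A sketch of typical use (not from the original documentation; the exact result depends on the heuristic):

>>> from music21 import *
>>> m = stream.Measure()
>>> m.repeatAppend(note.Note(), 3)
>>> ts = m.bestTimeSignature()
>>> (ts.numerator, ts.denominator)
(3, 4)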
measureNumberWithSuffix()
No documentation.
mergeAttributes(other)
Given another Measure, configure all non-element attributes of this Measure with the attributes of the other Measure. No elements will be changed or copied. This method is necessary because Measures, unlike some Streams, have attributes independent of any stored elements.
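
For instance (a sketch, not from the original documentation), the Measure number is one such non-element attribute:

>>> from music21 import *
>>> m1 = stream.Measure()
>>> m1.number = 5
>>> m2 = stream.Measure()
>>> m2.mergeAttributes(m1)
>>> m2.number
5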
padAsAnacrusis()

Given an incompletely filled Measure, adjust the paddingLeft value so that the contained events are represented as shifted to fill the right-most portion of the bar. Calling this method will overwrite any previously set paddingLeft value, based on the current TimeSignature-derived barDuration attribute.

>>> from music21 import *
>>> import copy
>>> m = stream.Measure()
>>> m.timeSignature = meter.TimeSignature('3/4')
>>> n = note.Note()
>>> n.quarterLength = 1
>>> m.append(copy.deepcopy(n))
>>> m.padAsAnacrusis()
>>> m.paddingLeft
2.0
>>> m.timeSignature = meter.TimeSignature('5/4')
>>> m.padAsAnacrusis()
>>> m.paddingLeft
4.0

Methods inherited from Stream: addGroupForElements(), allPlayingWhileSounding(), analyze(), append(), attachIntervalsBetweenStreams(), attributeCount(), augmentOrDiminish(), bestClef(), explode(), extendDuration(), externalize(), extractContext(), findConsecutiveNotes(), findGaps(), getClefs(), getElementAfterElement(), getElementAfterOffset(), getElementAtOrAfter(), getElementAtOrBefore(), getElementBeforeElement(), getElementBeforeOffset(), getElementById(), getElementByObjectId(), getElementsByClass(), getElementsByGroup(), getElementsByOffset(), getElementsNotOfClass(), getInstrument(), getKeySignatures(), getOffsetByElement(), getOverlaps(), getSimultaneous(), getTimeSignatures(), groupCount(), groupElementsByOffset(), hasElement(), hasMeasures(), hasPartLikeStreams(), hasVoices(), index(), indexList(), insert(), insertAndShift(), insertAtNativeOffset(), internalize(), invertDiatonic(), isSequence(), makeAccidentals(), makeBeams(), makeChords(), makeMeasures(), makeNotation(), makeRests(), makeTies(), makeTupletBrackets(), measure(), measureOffsetMap(), measures(), melodicIntervals(), mergeElements(), pitchAttributeCount(), playingWhenAttacked(), plot(), pop(), quantize(), remove(), repeatAppend(), repeatInsert(), replace(), scaleDurations(), scaleOffsets(), setupPickleScaffold(), shiftElements(), simultaneousAttacks(), sliceAtOffsets(), sliceByBeat(), sliceByGreatestDivisor(), sliceByQuarterLengths(), sort(), splitByClass(), storeAtEnd(), stripTies(), teardownPickleScaffold(), transferOffsetToElements(), transpose(), trimPlayingWhileSounding(), voicesToParts()

Methods inherited from Music21Object: addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getContextAttr(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), searchParentByAttr(), setContextAttr(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

Opus

Inherits from: Stream, Music21Object, JSONSerializer

class music21.stream.Opus(*args, **keywords)

A Stream subclass for handling multi-work music encodings. Many ABC files, for example, define multiple works or parts within a single file.

Opus attributes

Attributes inherited from Stream: isMeasure, isFlat, autoSort, isSorted, flattenedRepresentationOf

Attributes inherited from Music21Object: classSortOrder, id, groups

Opus properties

scores

Return all Score objects in an Opus.

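The example here appears truncated in the source; as a sketch, assuming the josquin/ovenusbant Opus used below contains three numbered Scores:

>>> from music21 import *
>>> o = corpus.parseWork('josquin/ovenusbant')
>>> len(o.scores)
3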

Properties inherited from Stream: notes, pitches, beat, beatDuration, beatStr, beatStrength, duration, elements, flat, highestOffset, highestTime, isGapless, lily, lowestOffset, metadata, midiFile, midiTracks, musicxml, mx, offsetMap, semiFlat, sorted, spanners, voices

Properties inherited from Music21Object: activeSite, classes, measureNumberLocal, offset, priority

Properties inherited from JSONSerializer: json

Opus methods

getNumbers()

Return a list of all numbers defined in this Opus.

>>> from music21 import *
>>> o = corpus.parseWork('josquin/ovenusbant')
>>> o.getNumbers()
['1', '2', '3']
getScoreByNumber(opusMatch)

Get Score objects from this Stream by number. Performs title search using the search() method, and returns the first result.

>>> from music21 import *
>>> o = corpus.parseWork('josquin/ovenusbant')
>>> o.getNumbers()
['1', '2', '3']
>>> s = o.getScoreByNumber(2)
>>> s.metadata.title
'O Venus bant'
>>> s.metadata.alternativeTitle
'Tenor'
getScoreByTitle(titleMatch)

Get Score objects from this Stream by a title. Performs title search using the search() method, and returns the first result.

>>> from music21 import *
>>> o = corpus.parseWork('essenFolksong/erk5')
>>> s = o.getScoreByTitle('Vrienden, kommt alle gaere')
>>> s = o.getScoreByTitle('(.*)kommt(.*)') # regular expression
>>> s.metadata.title
'Vrienden, kommt alle gaere'
mergeScores()

Some Opus objects represent numerous scores that are individual parts of the same work. This method will treat each contained Score as a Part, merging and returning a single Score with merged Metadata.

>>> from music21 import *
>>> o = corpus.parseWork('josquin/milleRegrets')
>>> s = o.mergeScores()
>>> s.metadata.title
'Mille regrets'
>>> len(s.parts)
4
show(fmt=None, app=None)
Displays an object in a format provided by the fmt argument or, if not provided, the format set in the user’s Environment. This method overrides the behavior specified in Music21Object.
write(fmt=None, fp=None)
Writes the object to a file in a format provided by the fmt argument or, if not provided, the format set in the user’s Environment. This method overrides the behavior specified in Music21Object.

Methods inherited from Stream: append(), insert(), insertAndShift(), transpose(), augmentOrDiminish(), scaleOffsets(), scaleDurations(), addGroupForElements(), allPlayingWhileSounding(), analyze(), attachIntervalsBetweenStreams(), attributeCount(), bestClef(), explode(), extendDuration(), externalize(), extractContext(), findConsecutiveNotes(), findGaps(), getClefs(), getElementAfterElement(), getElementAfterOffset(), getElementAtOrAfter(), getElementAtOrBefore(), getElementBeforeElement(), getElementBeforeOffset(), getElementById(), getElementByObjectId(), getElementsByClass(), getElementsByGroup(), getElementsByOffset(), getElementsNotOfClass(), getInstrument(), getKeySignatures(), getOffsetByElement(), getOverlaps(), getSimultaneous(), getTimeSignatures(), groupCount(), groupElementsByOffset(), hasElement(), hasMeasures(), hasPartLikeStreams(), hasVoices(), index(), indexList(), insertAtNativeOffset(), internalize(), invertDiatonic(), isSequence(), makeAccidentals(), makeBeams(), makeChords(), makeMeasures(), makeNotation(), makeRests(), makeTies(), makeTupletBrackets(), measure(), measureOffsetMap(), measures(), melodicIntervals(), mergeElements(), pitchAttributeCount(), playingWhenAttacked(), plot(), pop(), quantize(), remove(), repeatAppend(), repeatInsert(), replace(), setupPickleScaffold(), shiftElements(), simultaneousAttacks(), sliceAtOffsets(), sliceByBeat(), sliceByGreatestDivisor(), sliceByQuarterLengths(), sort(), splitByClass(), storeAtEnd(), stripTies(), teardownPickleScaffold(), transferOffsetToElements(), trimPlayingWhileSounding(), voicesToParts()

Methods inherited from Music21Object: addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getContextAttr(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), mergeAttributes(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), searchParentByAttr(), setContextAttr(), setOffsetBySite(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

Part

Inherits from: Stream, Music21Object, JSONSerializer

class music21.stream.Part(*args, **keywords)
A Stream subclass for designating music that is considered a single part. May be enclosed in a staff (for instance, 2nd and 3rd trombone on a single staff), may enclose staves (piano treble and piano bass), or may not enclose or be enclosed by a staff (in which case it is assumed that this part fits on one staff and shares it with no other part).

PartStaff

Inherits from: Part, Stream, Music21Object, JSONSerializer

class music21.stream.PartStaff(*args, **keywords)
A Part subclass for designating music that is represented on a single staff but may only be one of many staffs for a single part.

Score

Inherits from: Stream, Music21Object, JSONSerializer

class music21.stream.Score(*args, **keywords)

A Stream subclass for handling multi-part music. Absolutely optional (the largest containing Stream in a piece could be a generic Stream, or a Part, or a Staff). And Scores can be embedded in other Scores (in fact, our original thought was to call this class a Fragment because of this possibility of continuous embedding), but we figure that many people will like calling the largest container a Score and that this will become a standard.

Score attributes

Attributes inherited from Stream: isMeasure, isFlat, autoSort, isSorted, flattenedRepresentationOf

Attributes inherited from Music21Object: classSortOrder, id, groups

Score properties

lily
returns the lily code for a score.
parts

Return all Part objects in a Score.

>>> from music21 import *
>>> s = corpus.parseWork('bach/bwv66.6')
>>> parts = s.parts
>>> len(parts)
4

Properties inherited from Stream: notes, pitches, beat, beatDuration, beatStr, beatStrength, duration, elements, flat, highestOffset, highestTime, isGapless, lowestOffset, metadata, midiFile, midiTracks, musicxml, mx, offsetMap, semiFlat, sorted, spanners, voices

Properties inherited from Music21Object: activeSite, classes, measureNumberLocal, offset, priority

Properties inherited from JSONSerializer: json

Score methods

chordify(addTies=True, displayTiedAccidentals=False)
Split all Durations in all parts, if multi-part, by all unique offsets. All simultaneous durations are then gathered into single chords.
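
A sketch of one possible usage (not from the original documentation; it assumes chordify() returns the chordal reduction as a new Stream rather than operating in place):

>>> from music21 import *
>>> s = corpus.parseWork('bach/bwv66.6')
>>> reduction = s.chordify()
>>> len(reduction.flat.getElementsByClass(chord.Chord)) > 0
True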
implode()
Reduce a polyphonic work into one or more staves.
measure(measureNumber, collect=[<class 'music21.clef.Clef'>, <class 'music21.meter.TimeSignature'>, <class 'music21.instrument.Instrument'>, <class 'music21.key.KeySignature'>], gatherSpanners=True)

Given a measure number, return a single Measure object if the Measure number exists, otherwise return None. This method overrides the measures() method on Stream. This creates a new Score stream that has the same measure range for all Parts.

>>> from music21 import corpus
>>> a = corpus.parseWork('bach/bwv324.xml')
>>> len(a.measure(3)[0]) # contains 1 measure
1
measureOffsetMap(classFilterList=None)
This method overrides the measureOffsetMap() method of Stream. This creates a map based on all contained Parts in this Score. Measures found in multiple Parts with the same offset will be appended to the same list. This does not assume that all Parts have measures with identical offsets.
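
A sketch (not from the original documentation), assuming the four-part chorale used elsewhere in this document:

>>> from music21 import *
>>> s = corpus.parseWork('bach/bwv324.xml')
>>> om = s.measureOffsetMap()
>>> len(om[0.0]) # one Measure from each of the four Parts at offset 0.0
4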
measures(numberStart, numberEnd, collect=[<class 'music21.clef.Clef'>, <class 'music21.meter.TimeSignature'>, <class 'music21.instrument.Instrument'>, <class 'music21.key.KeySignature'>], gatherSpanners=True)
This method overrides the measures() method on Stream. This creates a new Score stream that has the same measure range for all Parts.
partsToVoices(voiceAllocation=2, permitOneVoicePerPart=False)
Given a multi-part Score, return a new Score that combines parts into voices. The voiceAllocation parameter can be an integer: if so, this many parts will each be grouped into one part as voices. The permitOneVoicePerPart parameter, if True, will encode a single voice inside a single Part, rather than leaving a single part alone.
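
A sketch (not from the original documentation) of combining a four-part chorale into two Parts of two Voices each:

>>> from music21 import *
>>> s = corpus.parseWork('bach/bwv66.6')
>>> reduced = s.partsToVoices(voiceAllocation=2)
>>> len(reduced.parts)
2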
sliceByGreatestDivisor(inPlace=True, addTies=True)
Slice all Durations of all Parts by the minimum duration that can be summed to each concurrent duration. Overrides the method defined on Stream.

Methods inherited from Stream: append(), insert(), insertAndShift(), transpose(), augmentOrDiminish(), scaleOffsets(), scaleDurations(), addGroupForElements(), allPlayingWhileSounding(), analyze(), attachIntervalsBetweenStreams(), attributeCount(), bestClef(), explode(), extendDuration(), externalize(), extractContext(), findConsecutiveNotes(), findGaps(), getClefs(), getElementAfterElement(), getElementAfterOffset(), getElementAtOrAfter(), getElementAtOrBefore(), getElementBeforeElement(), getElementBeforeOffset(), getElementById(), getElementByObjectId(), getElementsByClass(), getElementsByGroup(), getElementsByOffset(), getElementsNotOfClass(), getInstrument(), getKeySignatures(), getOffsetByElement(), getOverlaps(), getSimultaneous(), getTimeSignatures(), groupCount(), groupElementsByOffset(), hasElement(), hasMeasures(), hasPartLikeStreams(), hasVoices(), index(), indexList(), insertAtNativeOffset(), internalize(), invertDiatonic(), isSequence(), makeAccidentals(), makeBeams(), makeChords(), makeMeasures(), makeNotation(), makeRests(), makeTies(), makeTupletBrackets(), melodicIntervals(), mergeElements(), pitchAttributeCount(), playingWhenAttacked(), plot(), pop(), quantize(), remove(), repeatAppend(), repeatInsert(), replace(), setupPickleScaffold(), shiftElements(), simultaneousAttacks(), sliceAtOffsets(), sliceByBeat(), sliceByQuarterLengths(), sort(), splitByClass(), storeAtEnd(), stripTies(), teardownPickleScaffold(), transferOffsetToElements(), trimPlayingWhileSounding(), voicesToParts()

Methods inherited from Music21Object: addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getContextAttr(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), mergeAttributes(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), searchParentByAttr(), setContextAttr(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

SpannerStorage

Inherits from: Stream, Music21Object, JSONSerializer

class music21.stream.SpannerStorage(*arguments, **keywords)

For advanced use. This Stream subclass is used inside of a Spanner object to provide object storage. This subclass name can be used to search an object’s DefinedContexts and find any and all locations that are SpannerStorage objects. A spannerParent keyword argument must be provided by the Spanner in creation.

SpannerStorage attributes

Attributes without Documentation: spannerParent

Attributes inherited from Stream: isMeasure, isFlat, autoSort, isSorted, flattenedRepresentationOf

Attributes inherited from Music21Object: classSortOrder, id, groups

SpannerStorage properties

Properties inherited from Stream: notes, pitches, beat, beatDuration, beatStr, beatStrength, duration, elements, flat, highestOffset, highestTime, isGapless, lily, lowestOffset, metadata, midiFile, midiTracks, musicxml, mx, offsetMap, semiFlat, sorted, spanners, voices

Properties inherited from Music21Object: activeSite, classes, measureNumberLocal, offset, priority

Properties inherited from JSONSerializer: json

SpannerStorage methods

Methods inherited from Stream: append(), insert(), insertAndShift(), transpose(), augmentOrDiminish(), scaleOffsets(), scaleDurations(), addGroupForElements(), allPlayingWhileSounding(), analyze(), attachIntervalsBetweenStreams(), attributeCount(), bestClef(), explode(), extendDuration(), externalize(), extractContext(), findConsecutiveNotes(), findGaps(), getClefs(), getElementAfterElement(), getElementAfterOffset(), getElementAtOrAfter(), getElementAtOrBefore(), getElementBeforeElement(), getElementBeforeOffset(), getElementById(), getElementByObjectId(), getElementsByClass(), getElementsByGroup(), getElementsByOffset(), getElementsNotOfClass(), getInstrument(), getKeySignatures(), getOffsetByElement(), getOverlaps(), getSimultaneous(), getTimeSignatures(), groupCount(), groupElementsByOffset(), hasElement(), hasMeasures(), hasPartLikeStreams(), hasVoices(), index(), indexList(), insertAtNativeOffset(), internalize(), invertDiatonic(), isSequence(), makeAccidentals(), makeBeams(), makeChords(), makeMeasures(), makeNotation(), makeRests(), makeTies(), makeTupletBrackets(), measure(), measureOffsetMap(), measures(), melodicIntervals(), mergeElements(), pitchAttributeCount(), playingWhenAttacked(), plot(), pop(), quantize(), remove(), repeatAppend(), repeatInsert(), replace(), setupPickleScaffold(), shiftElements(), simultaneousAttacks(), sliceAtOffsets(), sliceByBeat(), sliceByGreatestDivisor(), sliceByQuarterLengths(), sort(), splitByClass(), storeAtEnd(), stripTies(), teardownPickleScaffold(), transferOffsetToElements(), trimPlayingWhileSounding(), voicesToParts()

Methods inherited from Music21Object: addContext(), addLocation(), addLocationAndActiveSite(), freezeIds(), getAllContextsByClass(), getContextAttr(), getContextByClass(), getOffsetBySite(), getSiteIds(), getSites(), getSpannerSites(), hasContext(), mergeAttributes(), purgeLocations(), removeLocationBySite(), removeLocationBySiteId(), searchParentByAttr(), setContextAttr(), setOffsetBySite(), show(), splitAtDurations(), splitAtQuarterLength(), splitByQuarterLengths(), unfreezeIds(), unwrapWeakref(), wrapWeakref(), write()

Methods inherited from JSONSerializer: jsonAttributes(), jsonComponentFactory(), jsonPrint(), jsonRead(), jsonWrite()

StreamIterator

class music21.stream.StreamIterator(srcStream)

A simple Iterator object used to handle iteration of Streams and other list-like objects.

StreamIterator methods

next()
No documentation.
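
A sketch (not from the original documentation); iterating over a Stream uses this iterator, whose Python 2 style next() returns each element in turn:

>>> from music21 import *
>>> s = stream.Stream()
>>> s.append(note.Note('C'))
>>> si = iter(s)
>>> si.next()
<music21.note.Note C>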

Voice

Inherits from: Stream, Music21Object, JSONSerializer

class music21.stream.Voice(givenElements=None, *args, **keywords)
A Stream subclass for declaring that all the music in the stream belongs to a certain “voice” for analysis or display purposes. Note that both Finale’s Layers and Voices as concepts are considered Voices here.