Methods and Classes useful in searching within scores.
For searching a group of scores, see the search functions within the corpus.base module.
approximateNoteSearch searches the list of otherStreams and returns an ordered list of matches; each returned stream gains a new property, matchProbability, showing how well it matches the source stream.
>>> from music21 import *
>>> s = converter.parse("c4 d8 e16 FF a'4 b-", "4/4")
>>> o1 = converter.parse("c4 d8 e GG a' b-4", "4/4")
>>> o1.id = 'o1'
>>> o2 = converter.parse("d#2 f A a' G b", "4/4")
>>> o2.id = 'o2'
>>> o3 = converter.parse("c8 d16 e32 FF32 a'8 b-8", "4/4")
>>> o3.id = 'o3'
>>> l = search.approximateNoteSearch(s, [o1, o2, o3])
>>> for i in l:
...     print(i.id, i.matchProbability)
o1 0.666666...
o3 0.333333...
o2 0.083333...
approximateNoteSearchNoRhythm searches the list of otherStreams, ignoring rhythm, and returns an ordered list of matches; each returned stream gains a new property, matchProbability, showing how well it matches.
>>> from music21 import *
>>> s = converter.parse("c4 d8 e16 FF a'4 b-", "4/4")
>>> o1 = converter.parse("c4 d8 e GG a' b-4", "4/4")
>>> o1.id = 'o1'
>>> o2 = converter.parse("d#2 f A a' G b", "4/4")
>>> o2.id = 'o2'
>>> o3 = converter.parse("c4 d e GG CCC r", "4/4")
>>> o3.id = 'o3'
>>> l = search.approximateNoteSearchNoRhythm(s, [o1, o2, o3])
>>> for i in l:
...     print(i.id, i.matchProbability)
o1 0.83333333...
o3 0.5
o2 0.1666666...
approximateNoteSearchOnlyRhythm searches the list of otherStreams, comparing rhythm only, and returns an ordered list of matches; each returned stream gains a new property, matchProbability, showing how well it matches.
>>> from music21 import *
>>> s = converter.parse("c4 d8 e16 FF a'4 b-", "4/4")
>>> o1 = converter.parse("c4 d8 e GG a' b-4", "4/4")
>>> o1.id = 'o1'
>>> o2 = converter.parse("d#2 f A a' G b", "4/4")
>>> o2.id = 'o2'
>>> o3 = converter.parse("c4 d e GG CCC r", "4/4")
>>> o3.id = 'o3'
>>> l = search.approximateNoteSearchOnlyRhythm(s, [o1, o2, o3])
>>> for i in l:
...     print(i.id, i.matchProbability)
o1 0.5
o3 0.33...
o2 0.0
approximateNoteSearchWeighted searches the list of otherStreams using a weighted comparison and returns an ordered list of matches; each returned stream gains a new property, matchProbability, showing how well it matches.
>>> from music21 import *
>>> s = converter.parse("c4 d8 e16 FF a'4 b-", "4/4")
>>> o1 = converter.parse("c4 d8 e GG2 a' b-4", "4/4")
>>> o1.id = 'o1'
>>> o2 = converter.parse("AAA4 AAA8 AAA16 AAA16 AAA4 AAA4", "4/4")
>>> o2.id = 'o2'
>>> o3 = converter.parse("c8 d16 e32 FF32 a'8 b-8", "4/4")
>>> o3.id = 'o3'
>>> o4 = converter.parse("c1 d1 e1 FF1 a'1 b-1", "4/4")
>>> o4.id = 'o4'
>>> l = search.approximateNoteSearchWeighted(s, [o1, o2, o3, o4])
>>> for i in l:
...     print(i.id, i.matchProbability)
o3 0.83333...
o1 0.75
o4 0.75
o2 0.25
mostCommonMeasureRythms returns a sorted list of dictionaries of the most common rhythms in a stream, where each dictionary contains:
number: the number of times a rhythm appears
rhythm: the rhythm found (with the pitches of the first instance of the rhythm transposed to C5)
measures: a list of measures containing the rhythm
rhythmString: a string representation of the rhythm (see translateStreamToStringOnlyRhythm)
>>> from music21 import *
>>> bach = corpus.parse('bwv1.6')
>>> sortedRhythms = search.mostCommonMeasureRythms(bach)
>>> for dict in sortedRhythms[0:3]:
...     print('no:', dict['number'], 'rhythmString:', dict['rhythmString'])
...     print(' bars:', [(m.number, str(m.getContextByClass('Part').id)) for m in dict['measures']])
...     dict['rhythm'].show('text')
...     print('-----')
no: 34 rhythmString: PPPP
bars: [(1, 'Soprano'), (1, 'Alto'), (1, 'Tenor'), (1, 'Bass'), (2, 'Soprano'), ..., (19, 'Soprano')]
{0.0} <music21.note.Note C>
{1.0} <music21.note.Note A>
{2.0} <music21.note.Note F>
{3.0} <music21.note.Note C>
-----
no: 7 rhythmString: ZZ
bars: [(13, 'Soprano'), (13, 'Alto'), ..., (14, 'Bass')]
{0.0} <music21.note.Note C>
{2.0} <music21.note.Note A>
-----
no: 6 rhythmString: ZPP
bars: [(6, 'Soprano'), (6, 'Bass'), ..., (18, 'Tenor')]
{0.0} <music21.note.Note C>
{2.0} <music21.note.Note B->
{3.0} <music21.note.Note B->
-----
rhythmicSearch takes two streams: the first is the stream to be searched and the second is a stream of elements whose rhythms must match the first. It returns a list of indices at which a successful match begins.
Searches are made based on quarterLength; thus a dotted sixteenth note and a quadruplet (4:3) eighth note will match each other.
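For instance (an illustrative aside, not part of the original docstring), both of those durations reduce to a quarterLength of 0.375, which is why rhythmicSearch treats them as identical:
>>> from music21 import *
>>> dottedSixteenth = duration.Duration('16th')
>>> dottedSixteenth.dots = 1
>>> dottedSixteenth.quarterLength
0.375
>>> quadrupletEighth = duration.Duration(0.5 * 3.0 / 4.0)  # an eighth compressed 4:3
>>> quadrupletEighth.quarterLength
0.375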
Example 1: First we will set up a simple stream for searching:
>>> from music21 import *
>>> thisStream = tinyNotation.TinyNotationStream("c4. d8 e4 g4. a8 f4. c4.", "3/4")
>>> thisStream.show('text')
{0.0} <music21.meter.TimeSignature 3/4>
{0.0} <music21.note.Note C>
{1.5} <music21.note.Note D>
{2.0} <music21.note.Note E>
{3.0} <music21.note.Note G>
{4.5} <music21.note.Note A>
{5.0} <music21.note.Note F>
{6.5} <music21.note.Note C>
Now we will search for all dotted-quarter/eighth elements in the Stream:
>>> searchStream1 = stream.Stream()
>>> searchStream1.append(note.Note(quarterLength = 1.5))
>>> searchStream1.append(note.Note(quarterLength = .5))
>>> l = search.rhythmicSearch(thisStream, searchStream1)
>>> l
[1, 4]
>>> stream.Stream(thisStream[4:6]).show('text')
{3.0} <music21.note.Note G>
{4.5} <music21.note.Note A>
Slightly more advanced search: we will look for any instances of an eighth note, followed by a note (or other element) of any length, followed by a dotted quarter note. Again, we will find two instances; this time we will tag them both with a lyric of "*" and then show the original stream:
>>> searchStream2 = stream.Stream()
>>> searchStream2.append(note.Note(quarterLength = .5))
>>> searchStream2.append(search.Wildcard())
>>> searchStream2.append(note.Note(quarterLength = 1.5))
>>> l = search.rhythmicSearch(thisStream, searchStream2)
>>> l
[2, 5]
>>> for found in l:
...     thisStream[found].lyric = "*"
>>> thisStream.show()
Now we can test the search on a real dataset and show the types of preparation needed to make it most likely to succeed. We will look through the first movement of Beethoven's string quartet op. 59 no. 2 to see how much more common the first search term (dotted-quarter, eighth) is than the second (eighth, anything, dotted-quarter). In fact, my hypothesis was wrong, and the second term is actually more common than the first! (N.B.: rests are counted here as well as notes.)
>>> op59_2_1 = corpus.parse('beethoven/opus59no2', 1)
>>> term1results = []
>>> term2results = []
>>> for p in op59_2_1.parts:
...     pf = p.flat.stripTies()  # consider tied notes as one long note
...     temp1 = search.rhythmicSearch(pf, searchStream1)
...     temp2 = search.rhythmicSearch(pf, searchStream2)
...     for found in temp1: term1results.append(found)
...     for found in temp2: term2results.append(found)
>>> term1results
[86, 285, 332, 432, 690, 1122, 1166, 1292, 21, 25, 969, 1116, 1151, 1252, 64, 252, 467, 688, 872, 1125, 1328, 1332, 1127]
>>> term2results
[243, 691, 692, 1080, 6, 13, 23, 114, 118, 280, 287, 288, 719, 726, 736, 1000, 1001, 1093, 11, 12, 118, 122, 339, 861, 862, 870, 1326, 1330, 26, 72, 78, 197, 223, 727, 1012, 1013]
>>> float(len(term1results))/len(term2results)
0.6388...
translateDurationToBytes takes a note.Note object and translates its duration to a one-byte representation: chr() of a scaled log of the quarter length, fitted to the range 1-127. The example below inverts the formula to recover the approximate quarterLength, and a sketch of the forward formula follows it.
>>> from music21 import *
>>> n = note.Note("C4")
>>> n.duration.quarterLength = 3 # dotted half
>>> trans = search.translateDurationToBytes(n)
>>> trans
'_'
>>> (2**(ord(trans[0])/10.0))/256 # approximately 3
2.828...
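The forward formula is not written out above, but it can be inferred from the inverse computation in the doctest. A plausible sketch, reconstructed from that inverse rather than quoted from the library source:
>>> import math
>>> ql = 3.0                                   # the dotted half used above
>>> byteVal = int(math.log(ql * 256, 2) * 10)  # scaled log, truncated to an integer
>>> byteVal
95
>>> chr(byteVal)                               # matches the '_' returned above
'_'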
translateNoteToByte takes a note.Note object and translates it to a single-byte representation: currently chr() of the note's MIDI number, or chr(127) for rests.
>>> from music21 import *
>>> n = note.Note("C4")
>>> search.translateNoteToByte(n)
'<'
>>> ord(search.translateNoteToByte(n)) == n.midi
True
translateNoteWithDurationToBytes takes a note.Note object and translates it to a two-byte representation: chr() of the note's MIDI number (or chr(127) for rests) followed by the log of the quarter length, fitted to 1-127 as in translateDurationToBytes above.
>>> from music21 import *
>>> n = note.Note("C4")
>>> n.duration.quarterLength = 3 # dotted half
>>> trans = search.translateNoteWithDurationToBytes(n)
>>> trans
'<_'
>>> (2**(ord(trans[1])/10.0))/256 # approximately 3
2.828...
translateStreamToString takes a stream of notesAndRests only and returns a string for searching on, encoding both pitch and duration (two bytes per element); a sketch of searching on such a string follows the example below.
>>> from music21 import *
>>> s = converter.parse("c4 d8 r16 FF8. a'8 b-2.", "3/4")
>>> sn = s.flat.notesAndRests
>>> streamString = search.translateStreamToString(sn)
>>> print(streamString)
<P>F<)KQFF_
>>> len(streamString)
12
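As an informal illustration (an aside, not part of the original docstring), the resulting byte string can be searched with ordinary Python string operations; here a two-note fragment of the same line is located by plain substring search:
>>> from music21 import *
>>> haystack = converter.parse("c4 d8 r16 FF8. a'8 b-2.", "3/4")
>>> needle = converter.parse("FF8. a'8", "3/4")
>>> hayString = search.translateStreamToString(haystack.flat.notesAndRests)
>>> needleString = search.translateStreamToString(needle.flat.notesAndRests)
>>> hayString.find(needleString) != -1
True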
translateStreamToStringNoRhythm takes a stream of notesAndRests only and returns a string for searching on, encoding only the pitches (one byte per element).
>>> from music21 import *
>>> s = converter.parse("c4 d e FF a' b-", "4/4")
>>> sn = s.flat.notesAndRests
>>> search.translateStreamToStringNoRhythm(sn)
'<>@)QF'
translateStreamToStringOnlyRhythm takes a stream of notesAndRests only and returns a string for searching on, encoding only the rhythms (one byte per element).
>>> from music21 import *
>>> s = converter.parse("c4 d8 e16 FF8. a'8 b-2.", "3/4")
>>> sn = s.flat.notesAndRests
>>> streamString = search.translateStreamToStringOnlyRhythm(sn)
>>> print(streamString)
PF<KF_
>>> len(streamString)
6
Wildcard (inherits from: Music21Object, JSONSerializer): an object that may have some properties defined, but others left undefined, that matches any single object in a music21 stream. It is the equivalent of the regular expression ".".
>>> from music21 import *
>>> wc1 = search.Wildcard()
>>> wc1.pitch = pitch.Pitch("C")
>>> st1 = stream.Stream()
>>> st1.append(note.HalfNote("D"))
>>> st1.append(wc1)
WildcardDuration (inherits from: Duration, DurationCommon, JSONSerializer): a wildcard duration; it might define a duration in itself, but the methods here will see that it is a wildcard of some sort. The first positional argument is assumed to be a type string or a quarterLength.
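A brief sketch of one way it might be used, assuming (based on the Wildcard example above, not on the library source) that the rhythmic search routines treat a WildcardDuration as matching any duration: attaching it to an ordinary note leaves only that position's rhythm unconstrained, so the pattern below should reproduce the searchStream2 result from the rhythmicSearch example.
>>> from music21 import *
>>> thisStream = tinyNotation.TinyNotationStream("c4. d8 e4 g4. a8 f4. c4.", "3/4")
>>> searchStream3 = stream.Stream()
>>> searchStream3.append(note.Note(quarterLength=0.5))    # an eighth note
>>> anyLength = note.Note()
>>> anyLength.duration = search.WildcardDuration()        # any duration at this position
>>> searchStream3.append(anyLength)
>>> searchStream3.append(note.Note(quarterLength=1.5))    # a dotted quarter
>>> search.rhythmicSearch(thisStream, searchStream3)
[2, 5]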