Created: 10 years, 5 months ago by Devon Schudy
Modified: 10 years, 4 months ago
CC: lilypond-devel@gnu.org
Visibility: Public
Description: Support articulations, slurs and breaths in MIDI.
Articulations and breaths can alter the length and volume of the note.
(Breaths affect the previous chord.) This is controlled by their
perform-length and perform-volume properties. The standard articulations
now have these properties where appropriate.
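As a rough mental model of what perform-length and perform-volume do, here is an illustrative Python sketch. The names, factors, and velocity deltas are assumptions for illustration only, not LilyPond's actual property machinery (the patch stores these as properties on each ArticulationEvent, and a later patch set renames them with a midi- prefix):

```python
# Illustrative articulation table: (length factor, velocity delta).
# Values are made up for the example; the patch lets each
# articulation carry its own properties.
ARTICULATIONS = {
    "staccato":      (0.5,  0),
    "staccatissimo": (0.25, 0),
    "tenuto":        (1.0,  0),
    "accent":        (1.0,  20),
}

def perform(note_length, velocity, articulation):
    """Apply an articulation's length/volume adjustments to a note.

    Length is scaled; velocity is adjusted additively (patch set 5
    notes that extra-velocity adds instead of multiplying) and
    clamped to the MIDI range 0-127.
    """
    factor, delta = ARTICULATIONS.get(articulation, (1.0, 0))
    return note_length * factor, max(0, min(127, velocity + delta))
```

An unknown articulation leaves the note untouched, which matches the idea that performance can be disabled per articulation.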
Notes in a slur overlap slightly in MIDI output. This approximates how
they're played on keyboards, and triggers the legato mode of many
synthesizers. The degree of overlap is controlled by the slurOverlap
context property.
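The slur behaviour above can be pictured with a small sketch (Python; the data layout and the default overlap value are illustrative assumptions, not the patch's internals). Each note under the slur is lengthened so it ends slightly after the next note begins:

```python
from fractions import Fraction

def apply_slur_overlap(notes, overlap=Fraction(1, 16)):
    """Extend every non-final note under a slur past the next
    note's start by `overlap` (durations in whole-note units).

    `notes` is a list of (start, duration) pairs; the returned list
    has the same starts with lengthened durations, so consecutive
    notes overlap slightly in the MIDI output.
    """
    out = []
    for i, (start, dur) in enumerate(notes):
        if i + 1 < len(notes):
            next_start = notes[i + 1][0]
            dur = (next_start + overlap) - start
        out.append((start, dur))
    return out
```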
Shorthand articulations indirect to their long versions. Previously a
shorthand (e.g. -.) looked up its name (dashDot) to get an articulation
name (staccato), and then created a new ArticulationEvent with that
name, ignoring the existing global (staccato). This meant changes to
the long names (such as adding properties) didn't affect the shorthand.
Now it looks up the articulation name to get the existing ArticulationEvent.
This makes c-. behave exactly like c\staccato.
Patch Set 1 #
Patch Set 2 : Fix warning in Midi_note. #
Patch Set 3 : Copy articulations in case they're modified. Declare properties. #
Patch Set 4 : No more slurs. Use absolute times so time signature doesn't affect output. #
Patch Set 5 : Property names with midi-; extra-velocity adds instead of multiplying; breaths don't shorten notes … #
Total comments: 1
Patch Set 6 : Allow articulation shorthands to be any post-event. #
Messages
Total messages: 39
Shouldn't Lilypond's MIDI output be as beautiful as its engraving output?

This patch is a step toward that: it makes the most common articulations work in MIDI without requiring the use of articulate.ly and a separate \score. It also overlaps notes in slurs, which gives better results on legato instruments (e.g. winds and strings) than articulate.ly's approach of shortening unslurred notes.

Since the meanings of articulations vary widely, their performance can be customized or disabled through properties on the articulation (and the parser is modified to make c-. equivalent to c\staccato, so setting its properties works). This is less convenient than it ought to be, especially for \breathe — is there a better way? Currently slurOverlap is a context property because I couldn't find a better place to put it, but maybe the other properties should also be on contexts so they're easier to set.

This change doesn't interfere with articulate.ly, since the script's output doesn't contain any articulations.

Problems:

1) Length calculations use written durations rather than absolute time, which is not ideal — switching from 2/2 to 2/4 shouldn't affect the sound at all. I think this can be handled by looking up the tempo and time signature.

2) Perform-length is not enough to support everything articulate.ly does — it can't express ornaments that add notes (e.g. \mordent) or change timing (e.g. swing). Ideally, articulations would be able to replace notes with arbitrary music, not just change the length. AFAICT this is hard to implement, because it requires feeding the replacements back into the performers without interfering with the ongoing iteration. (It also requires changing how Audio_item times are determined, so they can start at times other than now_mom(), but I have a solution for this.) An easier alternative would be to have the perform hook return a list of notes, but this doesn't cover \fermata, which needs to change the tempo.
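Problem 1 boils down to converting written durations into absolute time using the current tempo. One plausible shape of that conversion, as a hedged Python sketch (the function name and calling convention are hypothetical, not part of the patch):

```python
from fractions import Fraction

def duration_seconds(written, beat_unit, bpm):
    """Convert a written duration to seconds.

    `written` is a fraction of a whole note (a quarter is 1/4);
    `beat_unit` is the tempo's beat value (Fraction(1, 4) for
    "quarter = bpm"); `bpm` is beats per minute.  With this in hand,
    a change of time signature alone no longer changes the sound.
    """
    beats = written / beat_unit
    return float(beats * Fraction(60) / bpm)
```

For example, a quarter note at quarter = 120 lasts half a second regardless of whether the time signature is 2/2 or 2/4.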
Fix warning in Midi_note.
I am thankful for MIDI improvements in general, and for these features in particular. I'm not reviewing the code, but there is one thing I want to throw a little cold water on.

> \fermata, which needs to change the tempo

Fermata performance is not necessarily uniform. In some music it just marks the end of each line of the poem, and it is left as a matter of interpretation whether to perform it as a long hold, an extra beat, a shortening of the note to breathe in tempo, or not at all. It can vary from line to line and even for the same line in different stanzas. If the performer will by default modify the tempo at fermatas, I hope that there will at least be a simple and documented way to suppress it at chosen points.

I'm curious whether this patch handles offset fermatas well, e.g.

  soprano = { e4 e'\fermata }
  alto = { b2\fermata }

If these appear on different staves or partcombined, what happens to the tempo? In the context I have in mind, the soprano should change on the beat, unaffected by the alto's fermata.

> Notes in a slur overlap slightly in MIDI output. This approximates how
> they're played on keyboards, and triggers the legato mode of many
> synthesizers. The degree of overlap is controlled by the slurOverlap
> context property.

[Uh oh, here comes the urge to rant about pianists arranging cello parts. Must... resist...]

As a string player, I wonder if it would be difficult to call out to a Scheme function which receives two chords and returns the overlap, which can be negative to indicate that the transition must have a gap rather than an overlap. I imagine that other instruments also have their own limitations when it comes to slurring.

Also, is overlap defined in units that are independent of tempo? To me it would seem unnatural for a slur between the same two notes to overlap longer at 40 bpm than it does at 120 bpm. I would want my overlap function to say "this requires a big shift, so perform it with a 125-ms gap" rather than having to figure out how to express the gap in terms of the current tempo.

Thanks again,
— Dan
Dan Eble <dan@faithful.be> writes:

> [Uh oh, here comes the urge to rant about pianists arranging cello
> parts. Must... resist...]
>
> As a string player, I wonder if it would be difficult to call out to a
> scheme function which receives two chords and returns the overlap,
> which can be negative to indicate that the transition must have a gap
> rather than an overlap. I imagine that other instruments also have
> their own limitations when it comes to slurring.

I would expect that physical limitations of the execution of a particular instrument would be the task of the MIDI expander, with the MIDI signal being mostly instrument-independent (apart from preselecting a particular instrument).

-- David Kastrup
On Nov 15, 2013, at 09:37 , David Kastrup <dak@gnu.org> wrote: > I would expect that physical limitations of the execution of a > particular instrument would be the task of the MIDI expander, with the > MIDI signal being mostly instrument-independent (apart from preselecting > a particular instrument). That is an argument against adding any overlap, right? — Dan
Dan Eble <dan@faithful.be> writes:

> On Nov 15, 2013, at 09:37, David Kastrup <dak@gnu.org> wrote:
>
>> I would expect that physical limitations of the execution of a
>> particular instrument would be the task of the MIDI expander, with the
>> MIDI signal being mostly instrument-independent (apart from preselecting
>> a particular instrument).
>
> That is an argument against adding any overlap, right?

Uh, no? If that's the usual representation of slurs in MIDI, why would it? It's probably not spelled out hard in any standard, but I expect that there are some de facto standards used for MIDI play. I have no significant experience with MIDI, so I have no actual knowledge about established representations of articulations. But I'm pretty confident that this is an area where LilyPond should try following conventions, where they exist, rather than being inventive.

-- David Kastrup
Dan Eble wrote:

> Fermata performance is not necessarily uniform. In some music it
> just marks the end of each line of the poem, and it is left as a
> matter of interpretation whether to perform it as a long hold, an
> extra beat, a shortening of the note to breathe in tempo, or not at
> all. It can vary from line to line and even for the same line in
> different stanzas.

This is why articulation performance needs to be customizable. Whatever Lilypond does won't always be right — but if users who care can override it, it doesn't have to be. Fortunately, a conservative interpretation (e.g. half-length staccatos; fermatas one beat longer) is much better than nothing for almost everyone.

> If the performer will by default modify the tempo at fermatas, I
> hope that there will at least be a simple and documented way to
> suppress it at chosen points.
>
> I'm curious whether this patch handles offset fermatas well, e.g.
>
>   soprano = { e4 e'\fermata }
>   alto = { b2\fermata }

This patch doesn't handle fermatas at all; they're just something to support in the future. Like all of these MIDI features, this could be disabled by changing the relevant property. Maybe \override should affect audio too, not just grobs, to make this easier:

  \override Fermata.perform = ##f

> [Uh oh, here comes the urge to rant about pianists arranging cello
> parts. Must... resist...]

I'm actually a wind player, not a pianist, but MIDI is designed (and mostly used) for keyboards, so their interpretation is usually the best one to use in MIDI. Keyboard interpretation of slurs varies — sometimes it just suppresses the gap between notes, as in articulate.ly — but overlap is the one synthesizers recognize.

> I wonder if it would be difficult to call out to a scheme function
> which receives two chords and returns the overlap, which can be
> negative to indicate that the transition must have a gap rather than
> an overlap.

Oh, good idea! This is more general than just changing the length of individual notes. It would cover:

* the interpretation where there's a gap between unslurred notes
* instructions like "sempre staccato"
* the interpretation where there's a gap between repeated notes but not between successive different notes
* slurs (with a negative gap)
* staccat(issim)o, portato, breaths

If it's extended (later; Audio_item doesn't support this yet) to allow changing the start of the following note as well as the end of the previous, then it could also handle:

* swing (by altering only note pairs that are aligned with a beat)
* time-stealing tenuto
* starting notes with slow attacks (e.g. low notes on winds and strings) earlier

Design questions:

* Should it return a gap or a duration? I think the latter is convenient a little more often.
* Should slurs use a different duration function, or should slur status be a parameter? The latter saves a context property, and covers the staccato-under-slur notation for portato.
* Should it take chords or individual notes? The latter is easier to use, and supports partially tied chords. Does the gap between notes ever depend on other notes in the chord?
* If it takes individual notes, should it take NoteEvents or just pitch and duration? The latter is more convenient but less general.

> Also, is overlap defined in units that are independent of tempo? To
> me it would seem unnatural for a slur between the same two notes to
> overlap longer at 40 bpm than it does at 120 bpm. I would want my
> overlap function to say "this requires a big shift, so perform it
> with a 125-ms gap" rather than having to figure out how to express
> the gap in terms of the current tempo.

It isn't independent of tempo but it probably should be. I used written durations because that's what Lilypond uses internally, and because breaths and staccatos sound better if they align with the rhythm. However, staccatissimo and gap/overlap want absolute durations. Maybe it should be expressed in written durations, but there should be a ly:milliseconds convenience function to express absolute times in moments. (Does this require a context argument?)

So eventually, typical use (e.g. for a more staccato style: very short staccatos, half-length portatos, and gaps between unslurred notes) would look something like:

  staccato = (make-articulation "staccato"
    'perform-duration
    (lambda (pitch next-pitch duration slur?)
      (ly:milliseconds (if (< (ly:pitch-octave pitch) -1) 40 30))))

  portato = (make-articulation "portato"
    'perform-duration
    (lambda (pitch next-pitch duration slur?)
      (let ((beat (ly:get-context-property context 'tempo-unit)))
        (ly:moment-mul (if (ly:moment<? duration beat) duration beat)
                       (ly:make-moment 1 2)))))
  ;; I wish the ordinary numeric operators worked on moments

  \set Staff #'perform-duration =
    (lambda (pitch next-pitch duration slur?)
      (if slur?
          duration
          (ly:moment-sub duration (ly:milliseconds 20))))
On Nov 15, 2013, at 19:18, Devon Schudy <dschudy@gmail.com> wrote:

> Dan Eble wrote:
>> Fermata performance is not necessarily uniform.
>> . . .
>
> This is why articulation performance needs to be customizable.
> . . .

OK, I simply misinterpreted the text of your message to mean that you had implemented fermata performance as a tempo change.

>> [Uh oh, here comes the urge to rant about pianists arranging cello
>> parts. Must... resist...]
>
> I'm actually a wind player, not a pianist
> . . .

I didn't mean you specifically. :)

> Design questions:
> * Should it return a gap or a duration? I think the latter is
>   convenient a little more often.
> * Should slurs use a different duration function, or should slur
>   status be a parameter? The latter saves a context property,
>   and covers the staccato-under-slur notation for portato.
> * Should it take chords or individual notes? The latter is easier to
>   use, and supports partially tied chords. Does the gap between notes
>   ever depend on other notes in the chord?
> * If it takes individual notes, should it take NoteEvents or just
>   pitch and duration? The latter is more convenient but less general.

Well, since you're asking -- I certainly wouldn't bother to go this far if I were implementing it for my own use -- I think you'd end up with more realistic output by maintaining a model of the current articulation state (hand position, fingering, etc.) and using knowledge of the nature of the instrument (limited polyphony, sustain by rolling, crook changes, etc.) to search for optimal ways to change the articulation state from event to event, looking ahead to future events, and constrained by

* the explicit articulations expressed in the ly (fingerings, bowings, breath marks, page turns, etc.)
* the configured ability of the players (span of the hand, pulmonary function, number of fingers and thumbs, etc.)

But seriously: No matter how much Lilypond improves its MIDI output, I think that MIDI playback of anything other than keyboard and percussion instruments will always cause various parts of my body to clench. For my own use, I wouldn't consider look-ahead beyond the next pitch worth the increased complexity. I do think it would make some difference if there were a rest between the current and next pitch.

For strings, I would think that the way that chords of 3 or more notes are broken (it's hard to play more than 2 strings at a time) has more bearing on realism than minute adjustments of overlap/gap due to fingering. (That seems like a much different problem though.)

>> Also, is overlap defined in units that are independent of tempo?
>> . . .
>
> It isn't independent of tempo but it probably should be.
> . . .
> (lambda (pitch next-pitch duration slur?)
>   (ly:milliseconds (if (< (ly:pitch-octave pitch) -1) 40 30))))

Or (ly:seconds (... 0.04 0.03))?

— Dan
Dan Eble <dan@faithful.be> writes: > But seriously: No matter how much Lilypond improves its MIDI output, I > think that MIDI playback of anything other than keyboard and > percussion instruments will always cause various parts of my body to > clench. MIDI _is_ _data_. It is the task of the MIDI expander to simulate an actual _instrument_. -- David Kastrup
[Taking the message public again after a previous reply-to-sender.]

On Nov 16, 2013, at 12:57, David Kastrup <dak@gnu.org> wrote:

> LilyPond's MIDI output is not supposed to approximate the actions of a
> competent human player. LilyPond's MIDI output is supposed to provide
> the melodic information for a MIDI expander or MIDI sequencer.
>
>> I can't tell if you're against Devon's patch, or against the
>> enhancements I have suggested,
>
> I repeat: your enhancements have no place in the MIDI output of
> LilyPond. LilyPond is not an instrument simulator even if we are
> looking at MIDI output. If it were, its output would have to be a sound
> file. What you ask for belongs in the MIDI expander, not in the MIDI
> generator.
>
>> . . .
>
> MIDI is not supposed to be realistic. MIDI is supposed to provide the
> necessary melodic information for a MIDI expander to convert it into
> sound. This information is not encoded differently for different
> instruments. There may be instrument-specific controllers (like the
> rather open-minded "Expression") which one can use to convey additional
> information. But that's not part of the basic information transmitted
> in the normal channels using the normal controllers.

I think I understand what you are saying; but if I do, it sounds like you do not understand me.

I suggest that if a thousand pianists sat down in turn at a keyboard controller with a difficult piece of music, and we said, “NO EMOTION: Play like a machine!” and we recorded their performances to MIDI files; then separately we had Lilypond produce a MIDI file of the piece containing the “necessary melodic information”; the results would differ. I suggest that there would be certain things that most of the human performances had in common, such as releasing certain notes a little earlier than the score called for in order to prepare for upcoming notes.

And I maintain that because all the files are MIDI files, it is reasonable to say that there is room for the Lilypond MIDI output to be more realistic. It's not reaching beyond MIDI to nudge the quality of Lilypond output in that direction; it doesn't require sound-file output.

Regards,
— Dan
Dan Eble <dan@faithful.be> writes:

> I think I understand what you are saying; but if I do, it sounds like
> you do not understand me.

Or the other way round.

> I suggest that if a thousand pianists sat down in turn at a keyboard
> controller with a difficult piece of music, and we said, “NO EMOTION:
> Play like a machine!” and we recorded their performances to MIDI
> files; then separately we had Lilypond produce a MIDI file of the
> piece containing the “necessary melodic information”; the results
> would differ.
>
> I suggest that there would be certain things that most of the human
> performances had in common, such as releasing certain notes a little
> earlier than the score called for in order to prepare for upcoming
> notes. And I maintain that because all the files are MIDI files, it
> is reasonable to say that there is room for the Lilypond MIDI output
> to be more realistic.

LilyPond MIDI output is not intended to "realistically" reflect the performance of a human player trying to play like a machine. That's not useful for anything. LilyPond MIDI output is intended to realistically reflect the performance of a machine.

For better or rather worse, it's the main interchange format we have with other music software, including other music typesetters, MIDI sequencers, and software intended to make music sound like played by a human player. It can serve as a proofhearing aid or a practice aid. It is not intended to serve as a substitute for a player when recording. For that, LilyPond produces sheet music fit for running through a human.

> It's not reaching beyond MIDI to nudge the quality of Lilypond output
> in that direction; it doesn't require sound-file output.

You can nudge the quality of a refrigerator in the direction of an oven, but that does not mean that you arrive at something that will do anything well.

-- David Kastrup
Just so that Devon does not get a wrong impression: I applaud every effort to make LilyPond's MIDI output better reflect its input, and this work is a great step in that direction.

Dan, however, suggests making LilyPond's output reflect not the music but an imaginary performance by a human player, which makes the MIDI less suitable for proofhearing. And that's a direction that makes no sense, because there is other specialized software for that job. LilyPond's MIDI output is intended to convey information, not emotion. It would be an ambitious task to "properly" convey "con fuoco" or "molto triste" like a human player would, but the MIDI way would be to crank up or down the speed and the expression controller.
On 2013/11/16 00:18:40, Devon Schudy wrote:

> Dan Eble wrote:
>
> I'm actually a wind player, not a pianist, but MIDI is designed (and
> mostly used) for keyboards, so their interpretation is usually the
> best one to use in MIDI. Keyboard interpretation of slurs varies —
> sometimes it just suppresses the gap between notes, as in
> articulate.ly — but overlap is the one synthesizers recognize.
>
>> Also, is overlap defined in units that are independent of tempo? To
>> me it would seem unnatural for a slur between the same two notes to
>> overlap longer at 40 bpm than it does at 120 bpm. I would want my
>> overlap function to say "this requires a big shift, so perform it
>> with a 125-ms gap" rather than having to figure out how to express
>> the gap in terms of the current tempo.
>
> It isn't independent of tempo but it probably should be.

If you say "overlap is the one synthesizers recognize": does that mean that there needs to be a physical gap, or is it sufficient if the note-on command of the next note comes before the note-off command of the previous note in the MIDI data, without any intervening time gap? That would probably make it easy to decide on an output (apart from slurred identical notes, which already provide a conundrum for the player) while probably making it non-trivial to code.
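For what it's worth, in a standard MIDI file no time gap is needed at all: note-on and note-off are independent events in a single time-ordered stream, so the next note-on may simply be scheduled at or before the previous note-off. A small illustrative sketch (Python, made-up data layout; not LilyPond code):

```python
def to_midi_events(notes):
    """Flatten (start, end, pitch) note tuples into a time-ordered
    stream of note-on/note-off events.

    Overlapping notes interleave naturally: a note-on can precede the
    previous note's note-off.  At equal times, note-ons sort first,
    so even exactly adjacent notes read as legato to the receiver.
    """
    events = []
    for start, end, pitch in notes:
        events.append((start, "on", pitch))
        events.append((end, "off", pitch))
    events.sort(key=lambda e: (e[0], e[1] != "on"))
    return events
```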
On Nov 17, 2013, at 01:30 , David Kastrup <dak@gnu.org> wrote: > It can serve as a proofhearing aid or a practice aid. It is not > intended to serve as a substitute for a player when recording. For > that, LilyPond produces sheet music fit for running through a human. This rebuttal counters what I never claimed as a goal. Didn’t I express my belief that going overboard with simulation was not worth the trouble because no amount of tweaking the MIDI would make the final synthesized result natural enough for my taste? — Dan
On Nov 17, 2013, at 01:50 , Werner LEMBERG <wl@gnu.org> wrote: > >> And that's a direction that makes no sense, because there is other >> specialized software for that job. LilyPond's MIDI output is intended >> to convey information, not emotion. > > Not necessarily. I can imagine that we have an include file > `beginner.ly', which makes lilypond's output sound like three cats > being tortured all the time. And it would help the amateur pianist arranging for the amateur cellist, and the amateur cellist arranging for the amateur clarinetist, and many others. — Dan
Hi Devon,

Like some other posters, I have not yet reviewed your changes in detail, but would like to make some comments on your patch at the design/concept level. First of all, many thanks for looking at the midi code - there are too few people apart from Jan and, I think, David K who have much idea of what happens in there.

On 2013/11/15 09:30:11, Devon Schudy wrote:

> Shouldn't Lilypond's MIDI output be as beautiful as its engraving output?

Hmmm... yes and no. LilyPond isn't intended to be a sequencer or an audio patch editor, and it doesn't do as much on the audio playback front as some of the WYSIWYG score editors, whether they are open-source projects like Denemo or paid, closed software such as Finale, Sibelius and friends. For LilyPond the aims are more limited: audio playback should provide a means of proof-hearing what you are trying to produce on the printed output. If there isn't a means of writing it on a score, you *probably* won't need to implement it immediately in the midi performer and/or the audio set of back-ends which contains MIDI.

> This patch is a step toward that: it makes the most common articulations
> work in MIDI without requiring the use of articulate.ly and a separate
> \score.

Good. You'll need small steps here, as the code is pretty heavily optimised OO coding, most of which is Jan's. If you're planning a patch for this, maybe raise an issue for it in the tracker system. Also, if your aim is to implement the articulate.ly audio effects without having to use articulate.ly's work-round of a parallel \score block, that's a good aim, too. If you can do this, the next goal for this sub-project may be to get the audio playback to honour the \repeat structures by translating the volta and tremolo flavours to unfold. I looked at this briefly but soon retired from the fray, gibbering. Until there's a way of doing this, articulate.ly will still be needed.

I think there was some discussion later on in the thread about producing audio output sounding as if played by machine or humans. I reckon "human" playback is beyond the scope of the LilyPond project, but a more accurate audio representation of what appears on the output score is definitely on-piste.

Another performer-type issue - very definitely separate from this patch - is transposed audio output. Best to note the interaction of the audio output and the \transpose and \transposition commands as a TODO - the issue is: does Lily apply pitch-bends to effect audio transposition to respect a \transposition command, and how does this play nicely when \transpose is used for the printed output?

> It also overlaps notes in slurs, which gives better results on legato
> instruments (e.g. winds and strings) than articulate.ly's approach of
> shortening unslurred notes.

Looks like a good approach. Are there any negative results for percussive and plucked instruments when using this approach?

> Since the meanings of articulations vary widely, their performance can be
> customized or disabled through properties on the articulation (and the parser is
> modified to make c-. equivalent to c\staccato, so setting its properties works).
> This is less convenient than it ought to be, especially for \breathe — is there
> a better way? Currently slurOverlap is a context property because I couldn't
> find a better place to put it, but maybe the other properties should also be on
> contexts so they're easier to set.
>
> This change doesn't interfere with articulate.ly, since the script's output
> doesn't contain any articulations.

Is co-existence and inter-operation with articulate.ly an overall goal for this set of patches?

> Problems:
>
> 1) Length calculations use written durations rather than absolute time, which is
> not ideal — switching from 2/2 to 2/4 shouldn't affect the sound at all. I think
> this can be handled by looking up the tempo and time signature.
>
> 2) Perform-length is not enough to support everything articulate.ly does — it
> can't express ornaments that add notes (e.g. \mordent) or change timing (e.g.
> swing). Ideally, articulations would be able to replace notes with arbitrary
> music, not just change the length. AFAICT this is hard to implement, because it
> requires feeding the replacements back into the performers without interfering
> with the ongoing iteration. (It also requires changing how Audio_item times are
> determined, so they can start at times other than now_mom(), but I have a
> solution for this.) An easier alternative would be to have the perform hook
> return a list of notes, but this doesn't cover \fermata, which needs to change
> the tempo.

Or have the hook scale up the durations of the notes you return by a factor you tune via a property for all fermatae?

Hope these comments make sense, Devon. (I'm another 'mere' wind player and beginner pianist.)

Cheers,
Ian
Copy articulations in case they're modified. Declare properties.
dak@gnu.org wrote:
> If you say "overlap is the one synthesizers recognize": does that
> mean that there needs to be a physical gap, or is it sufficient if
> the note-on command of the next note comes before the note-off
> command of the previous note in the MIDI data, without any
> intervening time gap?

I think the MIDI command just has to arrive later, but this is a slightly obscure feature, so actual synthesizers might do something else.

However, this is probably irrelevant, because synthesizers recognize overlap only if legato/portamento mode is on — and it usually isn't on by default, because it misinterprets chords. So the slurry sound of overlapping notes is just due to overlapping sound, not to special synthesizer support. Maybe LilyPond could emit the command to turn on portamento mode when there are no chords; I haven't tried this.

Slur overlap doesn't improve the sound nearly as much as articulations, it breaks MIDI round-tripping, and the current implementation is ad hoc, so maybe it shouldn't be included.

How does the parser create slurs? They seem to be post-events, but I don't see where they're defined. It would be more flexible to do slurs with a perform-length property on the SlurEvent, and that would generalize to other spanners.
On 2013/11/18 13:34:33, Devon Schudy wrote:
> mailto:dak@gnu.org wrote:
> > If you say "overlap is the one synthesizers recognize": does that
> > mean that there needs to be a physical gap, or is it sufficient if
> > the note-on command of the next note comes before the note-off
> > command of the previous note in the MIDI data, without any
> > intervening time gap?
>
> I think the MIDI command just has to arrive later, but this is a
> slightly obscure feature, so actual synthesizers might do something
> else.
>
> However, this is probably irrelevant, because synthesizers recognize
> overlap only if legato/portamento mode is on — and it usually isn't on
> by default, because it misinterprets chords. So the slurry sound of
> overlapping notes is just due to overlapping sound, not to special
> synthesizer support. Maybe Lilypond could emit the command to turn on
> portamento mode when there are no chords; I haven't tried this.

I don't think LilyPond should do something like that unless very specifically told to (and I'm not sure it is a good idea even then): the output should be a good basis for further processing with MIDI tools. Sound synthesis is only the last step. So the question is likely more how other notation programs view/produce the MIDI (for better or worse, one of the more important formats for getting LilyPond-produced music into other notation programs). The most important thing is to have a good understanding between us and other tools manipulating MIDI.

> Slur overlap doesn't improve the sound nearly as much as
> articulations, and breaks midi roundtripping, and the current
> implementation is ad-hoc, so maybe it shouldn't be included.

If you call the current implementation ad hoc, then it's probably at least a good idea to move it to a different issue.

> How does the parser create slurs? They seem to be post-events, but I
> don't see where they're defined.
ly/declarations-init.ly:81:"(" = #(make-span-event 'SlurEvent START)

They are no longer visible separately in the parser (see issue 3487). The user can redefine them to anything else, as if they were \some-identifier.

> It would be more flexible to do slurs
> with a perform-length property on the SlurEvent, and that would
> generalize to other spanners.

I don't think it makes sense to move LilyPond in that direction: we want clearly recognizable output. And even if we wanted to: at the current point in time, the MIDI phase has no access to a property system comparable to the grob property system. That means that all such fine-tuning data would need to be designed onto events or into generic context properties. So any _local_ specialization of execution would have to be done in an approach dissimilar to the typesetting. We would not be doing the complexity of our user/programming interfaces any favors with that.

So "tweakable MIDI", where we are talking about local tweaks, in my opinion should only be tackled once we have moved to GUILE2 _and_ we have reworked the context property system in a manner where \layout and \midi work in comparable ways, at least from the user interface. Otherwise we'll get another large divergence of interfaces that will have significant consolidation costs once the necessary facilities trickle in.
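[Editor's sketch of the point about redefinable shorthands: the first line below is the actual binding dak cites from ly/declarations-init.ly; the rebinding that follows is purely illustrative — the 'direction property on the event is an assumption, not something this patch does.]

```lilypond
% The default binding, from ly/declarations-init.ly:
"(" = #(make-span-event 'SlurEvent START)

% Because "(" is an ordinary assignment, a user can rebind it like any
% identifier -- e.g. to a slur event that also forces the curve upward:
"(" = #(make-music 'SlurEvent 'span-direction START 'direction UP)

{ c'4( d' e' f') }
```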
No more slurs. Use absolute times so time signature doesn't affect output.
Looks good, as far as I have had time to review.

I might propose a follow-up patch, to clarify in define-music-properties.scm that the 'volume' property affects the note-on velocity, as it should to represent an accent, rather than the channel volume. People who care about the MIDI output are likely to care about that distinction. I would need, however, an appropriate new name for the property, and the best I can think of now is 'strength'.
> I would need, however, an appropriate new name for the property, and the best
> I can think of now is 'strength'.

'attack'?
On 2013/11/23 07:25:31, benko.pal wrote:
> > I would need, however, an appropriate new name for the property, and the best
> > I can think of now is 'strength'.
>
> 'attack'?

'velocity'. There is nothing to be gained but confusion by inventing names different from the MIDI terminology.
k-ohara5a5a@oco.net wrote:
> I might propose a follow-up patch, to clarify in
> define-music-properties.scm that the 'volume' property affects the
> "note-on-velocity", as it should to represent an accent, rather than the
> "channel volume" of the channel. People who care about the midi output
> are likely to care about that distinction.
>
> I would need, however, an appropriate new name for the property, and the
> best I can think of now is 'strength'.

Is “relative-volume” clearer? Also, should it have something like “perform” in its name (or “midi”, like the midi* context properties) to make it easy for users looking for a layout property to ignore it?

The trouble with calling it ‘velocity’ is that dynamics currently also set velocity rather than channel volume, which is why they can't change the volume in the middle of a note. (There's code for using the channel volume, but it's disabled, apparently because of bugs?) Also, users might expect velocity to be an absolute MIDI velocity of 1 to 127 rather than something relative to the dynamic.

Should the type predicate for numeric properties be ‘real?’ or ‘number?’? I used ‘real?’ (since complex numbers don't make sense as volumes), but the existing properties use ‘number?’, which may be more comprehensible to users.

I'm working on supporting multi-note articulations like trills and mordents. For them, perform-note needs to take a note-event, not just a length, so it knows the pitch. (It can return a length, or a list of note-events.) Or should perform-length be separate from perform-note? That would make the behavior of notes with multiple midi-affecting articulations more predictable. The most common combination is accent + anything else, which works fine, because multiple volume properties multiply. Combinations like \prall\staccato are unpredictable, though — the result depends on which articulation comes first. If perform-length is separate from perform-note, perform-length can just apply first.
I found two bugs in the current patch:

1) A breath after a rest (or staccato) mistakenly shortens the last note, just like grace notes.

2) Looking up articulation names in the environment breaks if someone has a variable called “staccato” — which might happen if they're naming variations: plain, inverted, dotted, staccato. Redefining standard variables is not proper, but it's currently safe, so it would be nice not to break it. A cleaner alternative is to simply define dashDot = \staccato, leaving it to the user to replace both if they want to redefine an articulation.
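[Editor's sketch of the "cleaner alternative" in point 2 — this is what the message proposes, not necessarily what the final patch commits:]

```lilypond
% Bind the shorthand's long name directly to the articulation event,
% instead of looking the name "staccato" up in the environment at
% parse time:
dashDot = \staccato

% A user who wants -. to follow a redefined articulation then
% replaces both explicitly:
%   staccato = ...
%   dashDot = \staccato
```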
On Sat, 23 Nov 2013 08:31:17 -0800, Devon Schudy <dschudy@gmail.com> wrote:
> Is “relative-volume” clearer? Also, should it have something like
> “perform” in its name (or “midi”, like the midi* context properties)
> to make it easy for users looking for a layout property to ignore it?

Adding 'midi-' would help.

> The trouble with calling it ‘velocity’ is that dynamics currently also
> set velocity rather than channel volume, which is why they can't
> change the volume in the middle of a note. (There's code for using the
> channel volume, but it's disabled, apparently because of bugs?) Also
> users might expect velocity to be an absolute MIDI velocity of 1 to
> 127 rather than something relative to the dynamic.

Using the word 'velocity' in the name of the property that affects the MIDI note-on velocity should not cause confusion. Adding 'relative-', or the doc-string you already have, or seeing how the property is used with accents, should prevent misunderstanding it as the absolute note-on velocity.

In version 2.12, dynamics were implemented as channel volume. The volume did not change in the middle of the note, however, because there was no system to generate a MIDI event in the middle of the note. Sometimes two lines of music share a MIDI channel (if we need all 16 channels for different instruments), and Jan added an option to do that automatically. But then, when the left and right hands of a piano part, for example, shared a MIDI channel but had different dynamics, they would fight over the single volume setting. Jan moved the dynamic implementation to note-on velocity.

MIDI note-on velocity is good for dynamics. I think several MIDI players adjust the timbre based on that velocity. Crescendos can still be implemented with the channel 'volume', or better the 'expression' controller, if we add a mechanism to send a few control-change events through the crescendo.
"Keith OHara" <k-ohara5a5a@oco.net> writes:
> Using the word 'velocity' in the name of the property that affects the
> MIDI "note-on-velocity" should not cause confusion. Adding
> 'relative-', or the doc-string you already have, or seeing how the
> property is used with accents, should prevent misunderstanding as the
> absolute "note-on-velocity".

Instead of "relative" (which has other connotations in LilyPond), we should rather use the established "extra-" (like in extra-offset). So it would be extra-velocity.

-- 
David Kastrup
David Kastrup wrote:
> So it would be extra-velocity.

That makes it sound like the value is added to the velocity instead of multiplied — but actually it should be added, because addition gives a more consistent effect across dynamics. Multiplying a quiet note's velocity by 1.2 gives hardly any accent, but adding 20 sounds about the same at \pp and \ff. I'll change it.

Ian Hulin wrote:
> Also, if your aim is to implement the articulate.ly audio effects
> without having to use articulate.ly's work-around method of using a
> parallel \score block, that's a good aim, too.

That's the idea.

> the next goal for this sub-project may be to get the audio playback
> to honour the \repeat structures by translating the volta and
> tremolo flavours to unfold.

I want this too, but apparently it was discussed previously, and some users don't want voltas automatically unfolded, because they don't want to hear them twice when proofreading.

> Another performer-type issue - very definitely separate from this
> patch - is transposed audio output. Best to note the interaction of
> the audio output and the \transpose and \transposition commands as a
> TODO - the issue is whether Lily applies pitch-bends to effect audio
> transposition to respect a \transposition command and how does this
> play nicely when \transpose is used for the printed output?

Audio_note just uses Pitch::transposed; it doesn't do anything special about \transpose. IIUC the only problem with their interaction is that \transpose mistakenly transposes \transposition.

[on overlapping notes in slurs]
> Are there any negative results for percussive and plucked instruments
> when using this approach?

For undamped instruments like harp and most drums, no — these instruments generally ignore the note-off event, so it doesn't matter when it comes. For guitar-like instruments, the result is not a significant improvement, because the notes are still replucked instead of just fingered.
I don't think MIDI can do pull-offs without either portamento mode or pitch-bend abuse.

> Is co-existence and inter-operation with articulate.ly an overall
> goal for this set of patches?

Not specifically, but it would be annoying if it broke articulate.ly (by e.g. shortening staccatos twice), since so many people use it.

> Or have the hook scale up the durations of the notes you return by a
> factor you tune via a property for all fermatae?

Doing fermatas by changing the durations would require delaying all later notes. This isn't supported yet, but might not be terribly hard to add.
Property names with midi-; extra-velocity adds instead of multiplying; breaths don't shorten notes before rests.
On 2013/11/24 15:11:38, Devon Schudy wrote:
> David Kastrup wrote:
>
> > the next goal for this sub-project may be to get the audio playback
> > to honour the \repeat structures by translating the volta and
> > tremolo flavours to unfold.
>
> I want this too, but apparently it was discussed previously, and some
> users don't want voltas automatically unfolded, because they don't
> want to hear them twice when proofreading.

Perhaps do it conditionally, based on some context property?

> Audio_note just uses Pitch::transposed; it doesn't do anything special
> about \transpose. IIUC the only problem with their interaction is that
> \transpose mistakenly transposes \transposition.

Oh no, we fixed that in issue 754 in February, 2.17.13.

> > Or have the hook scale up the durations of the notes you return by a
> > factor you tune via a property for all fermatae?
>
> Doing fermatas by changing the durations would require delaying all
> later notes. This isn't supported yet, but might not be terribly
> hard to add.

A factor is the wrong thing to use, as then polyphonic passages will get out of sync when the last note at the fermata has a different length in different voices. I think that one beat or a fraction of a measure should likely be used.
2013/11/24 <dak@gnu.org>:
> On 2013/11/24 15:11:38, Devon Schudy wrote:
>
>> David Kastrup wrote:
>> > the next goal for this sub-project may be to get the audio playback
>> > to honour the \repeat structures by translating the volta and
>> > tremolo flavours to unfold.
>>
>> I want this too, but apparently it was discussed previously, and some
>> users don't want voltas automatically unfolded, because they don't
>> want to hear them twice when proofreading.
>
> Perhaps do it conditionally based on some context property?

I would like that! I'm one of the users who'd like the music to be automatically unfolded (in fact, I don't understand why someone would not want to have at least tremolo repeats unfolded).

Janek
On 2013/11/24 15:40:01, janek wrote:
> 2013/11/24 <dak@gnu.org>:
> > On 2013/11/24 15:11:38, Devon Schudy wrote:
> >
> >> I want this too, but apparently it was discussed previously, and some
> >> users don't want voltas automatically unfolded, because they don't
> >> want to hear them twice when proofreading.
> >
> > Perhaps do it conditionally based on some context property?

+1 (see also comment below)

> I would like that! I'm one of the users who'd like the music to be
> automatically unfolded (in fact, i don't understand why would someone
> not want to have at least tremolo repeats unfolded).

+1

Thinking about the context property David suggested (if it were called repeat-performance, for example), we could have three settings for it:

#'all (expand volta, tremolo, percent and unfold for the performer)
#'notvolta (expand only tremolo, percent and unfold for the performer)
#'unfold (as now, expand only unfold for the performer)

with repeat-performance = #'all as the default, and people who like the current behaviour able to set it to #'unfold.

Cheers,
Ian
2013/11/26 <ianhulin44@gmail.com>:
> Thinking about the context property David suggested (if it were called
> repeat-performance, for example) we could have three settings for it:
> #'all (expand volta, tremolo, percent and unfold for the performer)
> #'notvolta (expand only tremolo, percent and unfold for the performer)
> #'unfold (as now, expand only unfold for the performer)
> with repeat-performance=#'all as the default, and people who like the
> current behaviour able to set it to #'unfold

+1
On 2013/11/26 17:54:57, Ian Hulin (gmail) wrote:
> Thinking about the context property David suggested (if it were called
> repeat-performance, for example), we could have three settings for it:
> #'all (expand volta, tremolo, percent and unfold for the performer)
> #'notvolta (expand only tremolo, percent and unfold for the performer)
> #'unfold (as now, expand only unfold for the performer)
> with repeat-performance = #'all as the default, and people who like the
> current behaviour able to set it to #'unfold

I don't see why this should not just be a list of symbols (and it seems absurd not to unfold unfold). So:

#'(volta tremolo percent)
#'(tremolo percent)
#'()

If you want to have unfold unfolded conditionally, then add unfold to each of those lists.
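[Editor's sketch of what dak's list-of-symbols variant might look like in practice; the property name repeatPerformance and its placement in a \midi block are hypothetical — nothing here is implemented by these patches:]

```lilypond
\midi {
  \context {
    \Score
    % Unfold volta, tremolo and percent repeats in the MIDI output;
    % #'() would keep the current behaviour (only \repeat unfold).
    repeatPerformance = #'(volta tremolo percent)
  }
}
```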
On Nov 26, 2013, at 13:12, dak@gnu.org wrote:
> I don't see why this should not just be a list of symbols (and it seems
> absurd not to unfold unfold).
>
> So
> #'(volta tremolo percent)
> #'(tremolo percent)
> #'()
>
> If you want to have unfold unfolded conditionally, then add unfold to
> each of those lists.

In 2.16, the MIDI output for repeats with alternatives seems to perform them all sequentially. I can understand how someone who just wants to proof-listen in as little time as possible would want it to work as it does, but it is strangely different from the expected result of telling a musician “Don't take the repeats.”

Is there sense in allowing a distinction between “don't perform volta repeats; perform only the final alternative” and “don't perform volta repeats, but perform all alternatives”?

— Dan
On Nov 26, 2013, at 13:12, dak@gnu.org wrote:
> I don't see why this should not just be a list of symbols (and it seems
> absurd not to unfold unfold).

Isn't it equally absurd not to unfold percent repeats? The difference is only visual, right?

— Dan
It seems that the patches were somehow overlooked, but I've pushed them now. I apologize for the delay, Devon, and thank you for your work! You can mark this issue as closed.

Janek

a42a4f9 Support articulations and breaths in MIDI. (issue 3664)
6baa453 Support properties on articulations.
Message was sent while issue was closed.
It took me until reading the commit message to figure out that this is actually meddling with the parser. I really should do more reviewing. It might have made sense to put the parser part, which appears reasonably independent, into a separate issue to make it more visible. The parser is notably tricky, and there are a lot of ways one can go wrong when juggling music expressions. Not that it appears that you did.

There is, however, one restriction that does not appear to serve any purpose apart from being a restriction.

https://codereview.appspot.com/26470047/diff/70001/lily/parser.yy
File lily/parser.yy (right):

https://codereview.appspot.com/26470047/diff/70001/lily/parser.yy#newcode2821
lily/parser.yy:2821: } else if (ly_prob_type_p (s, ly_symbol2scm ("ArticulationEvent"))) {

I think there is no point in restricting the type to an ArticulationEvent. Just use a check for it being music and is_mus_type ("post-event") here. Supporting event functions as well (similar to how shorthands like |, (, ), and so on work) would be a nice gesture, but I don't see a nice scheme for that right now.
Allow articulation shorthands to be any post-event.
dak@gnu.org wrote:
> There is, however, one restriction that does not appear to serve any
> purpose apart from being a restriction.

Worse, it didn't even work, because ly_prob_type_p returns a Scheme boolean, so it's always true. :( Anything, even non-music, was allowed.

> lily/parser.yy:2821: } else if (ly_prob_type_p (s, ly_symbol2scm
> ("ArticulationEvent"))) {
> I think there is no point in restricting the type to an
> ArticulationEvent. Just use a check for it being music and is_mus_type
> ("post-event") here.

OK, done.