How to map note velocity to plugin parameter (for ALL keyboard notes)

Few plugins accept a MIDI message directly as input for parameter automation.
Generally, you have to set the parameters through the plugin's control interface (VST, AU, …).

You have to use the Ctrl+middle-click “Operate Controller now” function, as described at https://community.ardour.org/generic_midi_control

It only works with MIDI events; if only OSC is enabled, the function is disabled.

In any case, “Operate Controller now” allows you to set up a simple and non-shared control.

Presumably a more sophisticated behavior can be defined in the Ardour configuration files.

Are the specifications for low-level file configuration documented?

Regards,

We don't provide any mechanism for using note velocity in this way. Note velocity is rarely a repeatable parameter. You should be binding MIDI CC events, not note events, since the latter are assumed to be useful only for toggle controls (switches, buttons, etc.).

Everything about binding maps is in the manual: http://manual.ardour.org/using-control-surfaces/midi-binding-maps/
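For concreteness, a binding map is a small XML file; a minimal sketch of one follows (the CC number, bank size and DeviceInfo attribute values shown here are just examples, so check them against the manual page for your version):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ArdourMIDIBindings version="1.0.0" name="Example map">
  <!-- bank-size, motorised and threshold govern the banking and
       soft-takeover behaviour discussed later in this thread -->
  <DeviceInfo bank-size="8" motorised="no" threshold="10"/>
  <!-- bind MIDI CC 7 on channel 1 to the gain of the first banked track -->
  <Binding channel="1" ctl="7" uri="/route/gain B1"/>
</ArdourMIDIBindings>
```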

Anything left out is in the source code.

It is not true that OSC disables MIDI learn.

Thanks Paul, I understand.
Then I have to translate the MIDI event ( NOTEON(velocity) ==>> CCnnn(velocity) ) with mididings (Linux) or Bome MIDI Translator Pro (OSX); see the sketch after this paragraph. With this trick I can solve the problem.
I have observed that Ctrl+middle-click only appears if the Generic MIDI control surface is enabled. (I think it will also work with OSC.)
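With mididings, that translation could be sketched like this (the CC number and client name are arbitrary; mididings runs on the ALSA or JACK backends, i.e. Linux only):

```python
from mididings import *

# arbitrary client name; backend 'alsa' (Linux), since mididings has no CoreMIDI backend
config(backend='alsa', client_name='note2cc')

run(
    # turn every Note On, regardless of which key, into CC 20 carrying the velocity
    Filter(NOTEON) >> Ctrl(20, EVENT_VELOCITY)
)
```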

I noticed a small bug in binding maps (on the Ardour 5.10.0 64-bit OSX bundle).
If you map a MIDI controller's volume slider to a track's fader, everything works well at moderate speed. But when the rate of MIDI events gets too high, Ardour stops responding. Concretely: if the fader is at -inf and you move the MIDI slider very quickly (up to 127), the fader does not move. Likewise, when you move the slider slowly back down from 127, the fader does not move until the slider gets close to the fader's current value (-inf).

With some plugins (i.e., parameter bindings) the behavior is even more pronounced, making the CC mapping almost unusable even at low slider speeds. The problem is always the same one described above.

With a single MIDI event (a Note On, for example) I have never encountered any problems.

Regards,

cauldron: That is correct behavior if you do not have motorised faders. MIDI is expected to be banked, and if the fader cannot assume the value of a new channel when banked, Ardour will not move the value until the control is close to the current value. This is to prevent a control from jumping suddenly because it is set to +4 while the value is at -20 right now; that would be a very noticeable change in sound where none is intended. The distance the control can be from the current value and still be recognized is set by the smoothing control. In general, level settings during mixdown or recording are made in small increments, not large or full-scale movements. I can understand that in the case of quickly changing plugin parameters this may not be appropriate, and in such a case smoothing should be adjusted. Another possibility, if your controller supports it, is to set the control to encoder mode, where a delta value is sent to Ardour rather than an absolute value; there, a change of up to half full scale in either direction is recognized (+/- 63 in MIDI terms).
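To illustrate the logic being described (this is only a sketch of the idea, not Ardour's actual code; the names and the threshold value are made up):

```python
SMOOTHING = 10  # hypothetical threshold, 0..127, analogous to the smoothing control

def on_absolute_cc(current: int, incoming: int, motorised: bool) -> int:
    """Soft takeover: ignore an incoming absolute value that is too far
    from the current one, unless the surface has motorised faders."""
    if motorised or abs(incoming - current) <= SMOOTHING:
        return incoming      # close enough (or motorised): accept the new value
    return current           # too far away: leave the parameter untouched

def on_encoder_delta(current: int, delta: int) -> int:
    """Encoder mode: the controller sends a relative change instead of an
    absolute position, so there is no value jump to protect against."""
    return max(0, min(127, current + delta))
```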

lenovens: Thanks for the explanation (I have never used these functions before).
In practice, if I have to control the transient attack of a sound from the keyboard ( https://www.sonnox.com/plugin/envolution ), I need to be able to set the velocity-driven parameter discontinuously.
If I cannot disable Ardour's behavior for a CC message, I have noticed that a Note_On message is not subject to it, so I can freely jump from one value to another. In practice, with a MIDI processor I can collapse all the keyboard keys onto a single note and pass the Note_On velocity to the plugin parameter; a sketch follows.
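That key-collapsing trick could look like this in mididings (again only a sketch; the target note number is arbitrary):

```python
from mididings import *

config(backend='alsa', client_name='onekey')

run(
    # remap every key to note 60 while leaving the velocity untouched;
    # real use would also need to deal with overlapping notes
    Filter(NOTEON | NOTEOFF) >> Key(60)
)
```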

The behaviour that lenovens described is not limited to MIDI CC.

It just doesn’t tend to show up with NoteOn/NoteOff because we treat these as toggle messages, as I described. If you re-read his message, he did not mention MIDI CC at all. The behaviour is a general one: if the existing parameter value is “very far” from the value indicated by the incoming message (however it is indicated: velocity, CC value, whatever), the parameter will not be modified. “Very far” is defined by the smoothing value.

I understand; I apologize for my ignorance. I have checked the tolerance of the “Smoothing” parameter and the “Motorised” switch, which disables this behavior.

In the Jack Audio context, MIDI events (passed through Jack Audio with -X coremidi) are not as precise temporally as in a native context (CoreMIDI). With a buffer of 512 samples at 48 kHz, MIDI events arrive at the next Jack Audio cycle, i.e. 512/48000 ≈ 10.7 ms later. This means there is roughly 10 ms of jitter. I think this is a correct conjecture.

The ideal thing would be to use a bridge between native MIDI and OSC ( https://github.com/jstutters/MidiOSC ) within the same host (localhost). MidiOSC compiles cleanly on OSX 10.11, with the advantage that the source code can be adapted for real-time event processing.

Can I use an OSC message to do the same things I would do with a MIDI event for mapping plugin parameters?

There is no jitter. There is 10 msec (one audio cycle or buffer) of extra latency. Jitter and latency are not the same and are only barely related.

There is no need to use JACK unless you have a specific need for it. Ardour has full support for native CoreAudio and CoreMIDI.

OSC can do everything that MIDI can do, although the mechanisms and concepts are extremely different.
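For example, with the python-osc package you could set a strip's gain like this (the strip id and value are illustrative, and Ardour listens on UDP port 3819 by default; check the OSC address space documented for your Ardour version):

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 3819)   # Ardour's default OSC port

# set the gain of strip 1 to -10 dB (strip id and value are examples)
client.send_message("/strip/gain", [1, -10.0])
```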

You’re right, Paul. Jitter is not altered (<0.6 ms) with a Native Instruments Komplete Audio 6 USB 2.0 MIDI interface. I ran a timing measurement of 1000 MIDI events (round trip) on an embedded real-time Linux system. Each MIDI event starts after a random interval of 10 to 20 ms from the previous one.

interval between measurements: 10.000 … 20.000 ms

sampling 1000 midi latency values - please wait …
latency distribution:

CoreMIDI round trip test
2.8 - 2.9 ms: 9 #
2.9 - 3.0 ms: 31 ###
3.0 - 3.1 ms: 160 ################
3.1 - 3.2 ms: 505 ##################################################
3.2 - 3.3 ms: 285 ############################
3.3 - 3.4 ms: 10 #

Jack Audio round trip test (512 samples buffer @48kHz)
24.1 - 24.2 ms: 15 #
24.2 - 24.3 ms: 59 ######
24.3 - 24.4 ms: 229 ######################
24.4 - 24.5 ms: 519 ##################################################
24.5 - 24.6 ms: 176 #################
24.6 - 24.7 ms: 2 #

Unfortunately I am forced to use Jack Audio because of the large number of audio channels I need (>64), and I cannot use the VSTi interface because of its 32-channel limit.

The standalone Virtual Instrument receives MIDI events via CoreMIDI and uses Jack Audio as its audio interface. With Ardour I cannot use the CoreMIDI interface if I use Jack Audio. In practice this creates MIDI latency between the Virtual Instrument and the plugins inside Ardour: the larger the Jack Audio buffer, the higher the MIDI latency (compared to going through CoreMIDI).

With a Jack Audio buffer of 1024 samples at 48 kHz (about 21 ms), MIDI information may arrive at the plugin too late, especially if you have to process the transient attack of a sound.

For this reason, a bridge between native MIDI and OSC would drastically reduce latency to a value that does not depend on the Jack Audio buffer; see the sketch below. I see no other way to keep MIDI latency under control.
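Such a bridge could be sketched like this (assuming the mido and python-osc packages, with mido's rtmidi backend sitting on CoreMIDI under OSX; the port selection, strip id and OSC path are placeholders):

```python
import mido
from pythonosc.udp_client import SimpleUDPClient

osc = SimpleUDPClient("127.0.0.1", 3819)       # Ardour's default OSC port

with mido.open_input() as port:                # default MIDI input port
    for msg in port:                           # delivered outside the JACK cycle,
        if msg.type == 'note_on':              # so independent of the buffer size
            # forward the velocity as a fader position in 0..1 (example mapping)
            osc.send_message("/strip/fader", [1, msg.velocity / 127.0])
```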

I don’t understand "I am forced to use Jack Audio for a large number of audio channels to use (>64) ". The only thing you need JACK for is to do inter-application audio routing. Are you doing that?

Also, I don’t know what code you’re testing MIDI latency with, but … one of the dilemmas of MIDI APIs across all platforms is this: the API has to deliver the data in a particular thread (either by forcing it into that thread, or by the application requesting it from a given thread). In many scenarios, there will be a mixture of control data and music performance data in the same data streams. You want the music performance data in the audio thread, where it can be used for synthesis. You want the control data in some other thread, where it can be used to drive arbitrary operations, some of which may not be RT safe. Every MIDI API makes choices, and each choice then forces work onto the application.

JACK MIDI delivers all MIDI data into the audio thread, with an extra audio buffer’s worth of latency. CoreMIDI delivers all MIDI data into a different thread, with no extra latency. In both cases, some of the data has to be moved to another thread, by whatever means the application deems appropriate. This means that if you’re using JACK, you will see the extra latency as you increase the audio buffer size. If you use CoreMIDI, you won’t see the extra latency in your test, but if any of that data is destined for a synthesis engine, your application is going to have to deliver it to the audio thread(s), and to avoid jitter, that mechanism will almost certainly add … another audio buffer’s worth of latency.
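As a caricature of that handoff (toy code only, and not RT-safe; real implementations use a lock-free ring buffer rather than a Python queue):

```python
import queue

midi_to_audio = queue.Queue()                  # real code: lock-free ringbuffer

def handle_control(event):                     # arbitrary, possibly non-RT-safe work
    print("control event:", event)

def synthesize(event):                         # stand-in for the synthesis engine
    pass

def midi_callback(event):                      # runs in the MIDI API's thread
    if event["type"] in ("note_on", "note_off"):
        midi_to_audio.put(event)               # performance data goes to the audio thread
    else:
        handle_control(event)                  # control data is handled here

def audio_cycle(nframes):                      # runs once per audio buffer
    while True:                                # drain whatever arrived since the last
        try:                                   # cycle; this queueing step is where the
            event = midi_to_audio.get_nowait() # extra buffer of latency comes from
        except queue.Empty:
            break
        synthesize(event)
```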

Yes.
My Virtual Instrument can be instantiated only once, and can be used as an AUi/VSTi (up to 32 channels) or standalone (up to 512 channels). It is a closed-source commercial application; I cannot change it. I am now using 96 channels for internal DSP processing (within Ardour), so I am forced to use it in standalone mode. The multichannel audio output is set up as a multichannel infrastructure on the IP network (it can be very far away and segmented into various areas). This means the Jack client uses the “jackd -d netone” interface as a synchronous and driver-agnostic connection to the audio infrastructure. Jack Audio also lets me move some DSP (crossover, pseudo-synchronous card aggregation, upsampling, …) to the Jack server side, lightening the load on the OSX Jack client.
If I use the Reaper DAW I can use Jack Audio for audio and CoreMIDI for MIDI. I prefer Ardour because it speaks natively to Jack Audio, which saves a lot of CPU overhead compared to Reaper, which talks to Jack Audio through a driver.
The only disadvantage is that I cannot use CoreMIDI with Ardour if I use Jack Audio.

Jitter also depends on hardware/software factors. For example, some USB controllers have much lower jitter than others, and usually USB 2.0 is better than USB 1.1 (many USB MIDI cards are USB 1.1). Even the performance of the same MIDI interface differs completely depending on the operating system (Windows/Mac/Linux), and sometimes a different driver version produces different behavior with a non-class-compliant interface. The tests described above were made with an M-Audio Audiophile 2496 PCI interface and the alsa-midi-latency-test utility (https://github.com/koppi/alsa-midi-latency-test) in a real-time Linux environment ( http://www.cauldronmidi.org/index.php/products/benchmark-midi ). You can also find many instructive comparative tests in the document http://www.cauldronmidi.org/download/benchmarkmidi01.pdf
It’s all open source.

In general, if jitter is much lower than latency, it can be neglected. Still, it never hurts to check empirically what is going on; sometimes you find unexpected things.

In my particular case I would be content with a MIDI latency of less than 10 ms between CoreMIDI and Jack Audio MIDI. I’m almost certain that a CoreMIDI <-> OSC bridge would not add more than a few milliseconds of latency.