Precise delays between notes when synthesizing a song

I am writing C++ code that plays both digital audio (synthesized music) and MIDI music (using the RtMidi library) at the same time. The digital audio plays through the computer's sound device, while the MIDI music is played by an external synthesizer. I want to play a song that uses both digitized instruments and MIDI instruments, and I'm not sure of the best way to keep these two streams in sync:

  • A function like Sleep() is not usable because its latency is both too long and too inconsistent for my needs (which are on the order of 1 millisecond). If Sleep() regularly waits 5 ms when only 1 ms was requested, the song's tempo will be wrong, and if the delay varies from call to call, the tempo will be uneven.
  • Counting the samples written to the sound buffer gives ultra-precise timing between notes for the digital audio (the minimum delay of one sample is about 0.02 ms at 48 kHz), but that clock cannot drive the MIDI output. Because the sound is buffered, the notes are synthesized in batches (one buffer filled at a time, as fast as possible), so a burst of MIDI notes would be sent with no delay between them every time the digital sound buffer needs to be refilled (see the sketch after this list).
  • Live MIDI playback carries no timing information: a note sounds as soon as its message is sent, and it cannot be scheduled for later. I therefore have to send each MIDI event at exactly the right moment.
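
To illustrate the sample-counting clock mentioned above, here is a minimal sketch; the struct and its member names are made up for the example, not taken from my code:

// Minimal sketch of a sample-based clock (names are illustrative only).
// Each time a buffer of audio is synthesized, the frame counter advances,
// and musical time is derived from it rather than from a system timer.
#include <cstdint>

struct SampleClock {
    double   sampleRate    = 48000.0;  // frames per second
    uint64_t framesWritten = 0;        // total frames pushed to the device

    void advance(uint64_t frames) { framesWritten += frames; }

    // Current song position in seconds, exact to one sample (~0.02 ms at 48 kHz).
    double seconds() const { return framesWritten / sampleRate; }
};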

I am currently using nanosleep() - which only exists on Linux, not Windows - to wait for the correct time between notes. This keeps the digital audio and the MIDI data synchronized, but nanosleep() is not very consistent, so the resulting tempo is quite uneven.

Can anyone think of a way to keep accurate time between notes for both digital audio and MIDI data?


3 answers


If you want to use Boost, it has CPU-precision timers. If not, Windows provides QueryPerformanceCounter and QueryPerformanceFrequency, which can be used for CPU-based timing and should easily meet your precision needs. There are many timer-class implementations on the Internet, some of which work on both Windows and *nix systems.
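
A minimal sketch of a timer built on QueryPerformanceCounter; the class name and interface are just an illustration, not from any particular library:

// Sketch of a high-resolution timer using QueryPerformanceCounter (Windows).
#include <windows.h>

class HighResTimer {
public:
    HighResTimer() {
        QueryPerformanceFrequency(&freq_);   // ticks per second, fixed at boot
        QueryPerformanceCounter(&start_);
    }

    // Seconds elapsed since construction, typically sub-microsecond resolution.
    double elapsedSeconds() const {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        return double(now.QuadPart - start_.QuadPart) / double(freq_.QuadPart);
    }

private:
    LARGE_INTEGER freq_{};
    LARGE_INTEGER start_{};
};

Rather than sleeping for the whole gap between notes, you can sleep in short slices and poll a timer like this, sending each MIDI event once its scheduled time has arrived.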





The first problem is that you need to know how much audio has actually gone through the audio device. If your latency is low enough, you can estimate it from the amount of data you have pushed, but the delay between pushing the data and hearing it is a moving target, so you should try to get this information from the audio hardware itself. The information is available, and you should use it, because the jitter you get from errors in a latency estimate is often musically significant.

If you need to use sleep for synchronization, two problems will make it sleep longer than requested: 1. priority (if another process or thread has a higher priority, it will keep running after your timer expires) and 2. scheduling (if the system takes 5 milliseconds to swap processes or threads, that time can be added to your requested delay). Delays of that size are musically significant. Most MIDI APIs have a "sequencer" API that lets you queue timestamped data in advance, so you can avoid relying on system timers (a sketch of this follows below).
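
For example, RtMidi itself only sends messages immediately, but on Linux the underlying ALSA sequencer API can schedule timestamped events on a kernel-managed queue; a minimal sketch of that idea (error handling omitted, assumes the ALSA headers are available):

// Sketch: scheduling a note-on in advance with the ALSA sequencer queue,
// so the kernel delivers it at the requested time instead of a user-space sleep.
#include <alsa/asoundlib.h>

int main() {
    snd_seq_t* seq = nullptr;
    snd_seq_open(&seq, "default", SND_SEQ_OPEN_OUTPUT, 0);
    snd_seq_set_client_name(seq, "scheduled-midi-demo");

    int port = snd_seq_create_simple_port(seq, "out",
        SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
        SND_SEQ_PORT_TYPE_MIDI_GENERIC | SND_SEQ_PORT_TYPE_APPLICATION);

    int queue = snd_seq_alloc_named_queue(seq, "demo-queue");
    snd_seq_start_queue(seq, queue, nullptr);
    snd_seq_drain_output(seq);

    // Note-on (middle C, velocity 100) scheduled 2.0 s after the queue started.
    snd_seq_event_t ev;
    snd_seq_ev_clear(&ev);
    snd_seq_ev_set_source(&ev, port);
    snd_seq_ev_set_subs(&ev);                        // deliver to subscribed clients
    snd_seq_ev_set_noteon(&ev, 0, 60, 100);
    snd_seq_real_time_t when = {2, 0};               // seconds, nanoseconds
    snd_seq_ev_schedule_real(&ev, queue, 0, &when);  // 0 = absolute queue time
    snd_seq_event_output(seq, &ev);
    snd_seq_drain_output(seq);

    // ... render/play audio here; eventually:
    // snd_seq_free_queue(seq, queue);  snd_seq_close(seq);
    return 0;
}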



You may find this document helpful even if you are not using PortAudio for audio I/O:

http://www.portaudio.com/docs/portaudio_sync_acmc2003.pdf
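
The core idea in that paper is to timestamp everything against the audio stream's own clock rather than the system clock. A rough sketch of that approach with PortAudio; the MidiEvent struct, the global queue, and the dispatch loop are invented for the example (a real implementation would use a lock-free queue rather than a bare vector):

// Sketch: timing MIDI dispatch from the audio stream's clock (PortAudio).
#include <portaudio.h>
#include <vector>

struct MidiEvent {
    PaTime when;               // stream time (seconds) at which to send it
    unsigned char msg[3];      // raw MIDI bytes
};

// Filled by the synthesis code, consumed by the MIDI thread (needs real locking).
static std::vector<MidiEvent> g_pendingMidi;

static int audioCallback(const void*, void* output, unsigned long frames,
                         const PaStreamCallbackTimeInfo* timeInfo,
                         PaStreamCallbackFlags, void*) {
    // timeInfo->outputBufferDacTime is when the first sample of this buffer
    // will actually reach the DAC, on the same clock as Pa_GetStreamTime().
    // While synthesizing the buffer, tag each MIDI note with
    //   outputBufferDacTime + sampleOffsetInBuffer / sampleRate
    // and push it onto g_pendingMidi.
    float* out = static_cast<float*>(output);
    for (unsigned long i = 0; i < frames; ++i) out[i] = 0.0f;  // silence placeholder
    return paContinue;
}

// Separate thread: send each event once the stream clock reaches its timestamp.
void midiDispatchLoop(PaStream* stream) {
    size_t next = 0;                    // events assumed sorted by 'when'
    while (Pa_IsStreamActive(stream) == 1) {
        PaTime now = Pa_GetStreamTime(stream);
        while (next < g_pendingMidi.size() && g_pendingMidi[next].when <= now) {
            // send g_pendingMidi[next].msg with RtMidiOut::sendMessage(...)
            ++next;
        }
        Pa_Sleep(1);  // the 1 ms polling jitter does not accumulate into tempo drift
    }
}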



The answer to this lies not in small buffers, but in large ones.

Let's take a 3-minute song as an example.

First render the digital part and "tag" it with the MIDI notes. Then start playing it back and fire each MIDI note when its time comes, perhaps using a std::vector to keep the list in order. The timing can be adjusted with a global time offset:

Hugely incomplete but hopefully illustrative pseudocode:

start_digital_playing_thread();
int midi_time_sync = 10; // ms, global offset to line the MIDI up with the audio
if (time >= (midi_note[50]->time + midi_time_sync))
    play_midi_note(midi_note[50]); // hypothetical helper that sends the note
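
Fleshing that pseudocode out a little, here is a minimal C++ sketch of the same idea, assuming the digital part is rendered up front and tagged with MIDI events; every name in it (TaggedNote, audio_clock_seconds, send_note_on, ...) is invented for the example:

// Sketch of the "render first, tag with MIDI, play back against the audio clock" idea.
#include <vector>
#include <algorithm>

struct TaggedNote {
    double time;                              // seconds from the start of the song
    unsigned char channel, key, velocity;
};

double audio_clock_seconds();                 // hypothetical: how much of the rendered audio has played
void   send_note_on(const TaggedNote& n);     // hypothetical wrapper around RtMidiOut::sendMessage

std::vector<TaggedNote> notes;                // tagged while rendering the digital part
double midi_time_sync = 0.010;                // global offset (s) to line MIDI up with output latency

void playback_loop() {
    std::sort(notes.begin(), notes.end(),
              [](const TaggedNote& a, const TaggedNote& b) { return a.time < b.time; });
    size_t next = 0;
    while (next < notes.size()) {
        double now = audio_clock_seconds();
        while (next < notes.size() && notes[next].time + midi_time_sync <= now) {
            send_note_on(notes[next]);        // fire the MIDI note at its tagged time
            ++next;
        }
        // poll frequently; only the polling interval jitters, not the song tempo
    }
}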

