Best way to simulate music (notes) to quickly find notes at specific times

I am working on an iOS music app (written in C++) and my model looks something like this:

--Song
----Track
----Track
------Pattern
------Pattern
--------Note
--------Note
--------Note

So basically a Song has several Tracks, a Track can have several Patterns, and a Pattern has several Notes. Each of these things is represented by a class, and besides the Song object, they are all stored inside vectors.

Each Note has a "frame" parameter, so I can calculate when to play it. For example, if I have 44,100 samples per second and the frame of a particular note is 132,300, I know that note is due exactly 3 seconds in.
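For concreteness, a minimal sketch of that frame arithmetic (the constant and function names are just illustrative):

// Map between frame positions and playback time at 44,100 samples/second.
const long kSampleRate = 44100;

double frameToSeconds(long frame) { return static_cast<double>(frame) / kSampleRate; } // 132300 -> 3.0
long   secondsToFrame(double sec) { return static_cast<long>(sec * kSampleRate); }     // 3.0 -> 132300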

My question is, how should I represent these notes for the best performance? Right now I'm thinking of storing the notes in a vector data member of each Pattern, then looping over the Song's Tracks, over each Track's Patterns, and finally over each Pattern's Notes to see which ones have a frame of at least 132,300 and less than 176,400, i.e. which fall in the second of audio that starts 3 seconds in.

As you can tell, that is a lot of looping, and songs can be up to 10 minutes long. So I'm wondering whether this will be fast enough to work out all the frames and send the notes to the buffer in time.
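Here is a minimal sketch of the naive scan described above, with stripped-down Song/Track/Pattern/Note classes standing in for the real model:

#include <vector>

struct Note    { long frame; /* pitch, velocity, ... */ };
struct Pattern { std::vector<Note> notes; };
struct Track   { std::vector<Pattern> patterns; };
struct Song    { std::vector<Track> tracks; };

// Collect every note whose frame falls in [startFrame, endFrame),
// e.g. [132300, 176400) at 44,100 samples per second.
std::vector<Note> notesInWindow(const Song& song, long startFrame, long endFrame) {
    std::vector<Note> result;
    for (const Track& track : song.tracks)
        for (const Pattern& pattern : track.patterns)
            for (const Note& note : pattern.notes)
                if (note.frame >= startFrame && note.frame < endFrame)
                    result.push_back(note);
    return result;
}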

+3




4 answers


One thing you should remember is that improving performance usually means increasing memory consumption. That trade-off is relevant (and justified) here, because I believe you should store the same data twice, in two different ways.

First of all, you must have this basic structure for the song:

map<Track, vector<Pattern>> tracks;

This maps each Track to a vector of Patterns. A map is fine here because you are not interested in the order of the tracks.

Iterating over the Tracks and Patterns should be fast, as there will not be many of them (I assume). The main performance issue is iterating over thousands of Notes. Here is how I suggest solving it:

First of all, each Pattern object should have a vector<Note> as its main data store. All changes to the Pattern's contents are written to this vector<Note> first:

vector<Note> notes;



And for performance reasons, you can have a second way of storing the notes:

map<int, vector<Note>> measures;

This maps each measure (by its number) within the Pattern to the vector of Notes contained in that measure. Each time the data in the main notes store changes, you apply the same changes to the data in measures. You can also do this just once before playback, or even during playback in a separate thread.

Of course, you could store the notes only in measures, with no need to synchronize the two data sources. But that may be less convenient to work with when you have to apply bulk operations to groups of notes.
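As a minimal sketch of that synchronization step, here is how the measures index could be rebuilt from the primary notes store; framesPerMeasure is a hypothetical value (derived from tempo and time signature) that this answer does not spell out:

#include <map>
#include <vector>

struct Note { long frame; /* ... */ };

// Rebuild the secondary per-measure index from the primary note store.
std::map<int, std::vector<Note>> buildMeasureIndex(const std::vector<Note>& notes,
                                                   long framesPerMeasure) {
    std::map<int, std::vector<Note>> measures;
    for (const Note& note : notes)
        measures[static_cast<int>(note.frame / framesPerMeasure)].push_back(note);
    return measures;
}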

During playback, roughly the following algorithm runs before the next measure starts:

  • In each track, find all the patterns for which pattern->startTime <= [current playback second] <= pattern->endTime.
  • For each such pattern, compute the current measure number and get the vector<Note> for the corresponding measure from the measures map.
  • Now, until the next measure starts, you only need to iterate over the notes of the current measure (see the sketch after this list).
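Putting it together, a rough sketch of that per-measure lookup (startTime/endTime and the measures map are as described above; secondsPerMeasure and the function name are illustrative assumptions):

#include <map>
#include <vector>

struct Note { long frame; /* ... */ };

struct Pattern {
    double startTime, endTime;                 // playback window, in seconds
    std::map<int, std::vector<Note>> measures; // measure number -> notes
};

// Before the next measure starts, gather the notes that may play in it.
std::vector<Note> notesForCurrentMeasure(const std::vector<Pattern>& patterns,
                                         double currentSecond,
                                         double secondsPerMeasure) {
    std::vector<Note> active;
    for (const Pattern& p : patterns) {
        if (p.startTime <= currentSecond && currentSecond <= p.endTime) {
            // Measure number relative to the pattern's own start.
            int measure = static_cast<int>((currentSecond - p.startTime) / secondsPerMeasure);
            auto it = p.measures.find(measure);
            if (it != p.measures.end())
                active.insert(active.end(), it->second.begin(), it->second.end());
        }
    }
    return active;
}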
+5




Just keep the vectors you have.

During playback, you can simply hold a pointer (index) into each vector for the last note played. To look for new notes, you only have to check the next note in each vector, without any loops over the whole song.
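A minimal sketch of that cursor idea, assuming each vector is already sorted by frame (Cursor and playDueNotes are illustrative names):

#include <vector>

struct Note { long frame; /* ... */ };

// One cursor per note vector, pointing at the first note not yet played.
struct Cursor {
    const std::vector<Note>* notes;
    size_t next = 0;
};

// Emit every note due before endFrame, advancing the cursor as we go.
// Each note is visited exactly once over the whole playback; no searching.
template <typename Emit>
void playDueNotes(Cursor& c, long endFrame, Emit emit) {
    while (c.next < c.notes->size() && (*c.notes)[c.next].frame < endFrame) {
        emit((*c.notes)[c.next]);
        ++c.next;
    }
}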

+3




Keep your vectors sorted, and just try it; that is more important than any answer you can get here.

Questions like this should be answered with tests and prototypes; then you will know whether you actually have a problem. And by trying it out you will notice things that you usually don't see from theory alone.

+1




and my model looks something like this:

Your model is missing some critical concepts:

  • Tempo.
  • Dynamics.
  • Pedal.
  • Instrument.
  • Time signature.
  • (optional) Key.
  • Effects (reverb / chorus, pitch wheel).
  • Stereo positioning.
  • Lyrics.
  • Chord maps.
  • Composer information / title.

Each note has a "frame" parameter, so I can calculate when to play the note.

Your model is missing some critical concepts:

  • Articulation.
  • Aftertouch.
  • Note duration.

I would suggest taking a look at LilyPond. It is a suite of software, but it is also one of the most accurate ways of representing music in a human-readable text format.

My question is, how should I represent these notes for the best performance?

Put them all in a std::map<Timestamp, Note> and find the segment you want to play using lower_bound / upper_bound. Alternatively, you can binary search a flat std::vector, as long as the data is kept sorted.
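A sketch of both variants; note that std::multimap stands in for std::map so that several notes can share the same timestamp, and the names are illustrative:

#include <algorithm>
#include <map>
#include <vector>

struct Note { /* pitch, velocity, ... */ };
using Timestamp = long; // frame number

// Variant 1: ordered map keyed by timestamp.
void playRange(const std::multimap<Timestamp, Note>& score,
               Timestamp from, Timestamp to) {
    auto first = score.lower_bound(from);
    auto last  = score.lower_bound(to);
    for (auto it = first; it != last; ++it) {
        // play it->second ...
    }
}

// Variant 2: binary search in a flat vector kept sorted by timestamp.
struct TimedNote { Timestamp t; Note note; };

void playRange(const std::vector<TimedNote>& flat, Timestamp from, Timestamp to) {
    auto cmp   = [](const TimedNote& n, Timestamp t) { return n.t < t; };
    auto first = std::lower_bound(flat.begin(), flat.end(), from, cmp);
    auto last  = std::lower_bound(flat.begin(), flat.end(), to, cmp);
    for (auto it = first; it != last; ++it) {
        // play it->note ...
    }
}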

If you want more than a "beeper", creating a music app is a lot harder than you think. I highly recommend trying another project first.

0








