Implementing audio waveform view and timeline view in iOS?

I am working on an application that will let users record from the microphone, and I am using Audio Units for this. The audio backend is up and running, and I am starting to work on the views, controls, etc. I still have two things to do:

1) I will be using OpenGL ES to draw the audio input signal, since there seems to be no easier way to do real-time drawing; I will be drawing inside a GLKView. After something has been recorded, the user should be able to scroll back and forth and see the waveform without lag. I know it is possible, but I find it hard to see how it can be implemented. If the user scrolls, would I need to re-read the recorded audio every time and redraw everything? I obviously don't want to keep the entire recording in memory, and reading from disk is slow.

2) While scrolling, the user should also see a timeline. While I have some idea how to approach question 1, I have no idea how to implement the timeline.

All the functionality I am describing can be seen in the Voice Memos app. Any help is appreciated.

Here is an image that illustrates the question better.


1 answer


I did just that. The way I did it was to create a data structure that holds the audio data at several zoom levels. Unless you are displaying the audio at a resolution of one sample per pixel, you do not need to read every sample from disk, so you downsample the data ahead of time into much smaller arrays that can be kept in memory.

A naive example: suppose your waveform needs to display audio at 64 samples per pixel, and you have an array of 65536 stereo samples. You average each pair of L/R samples into a positive mono value, and then average every 64 of those positive mono values into one float. Your array of 65536 audio samples can then be rendered from an array of 512 "visual samples".

My real-world implementation got a lot more complicated than this, since I have ways to display all zoom levels, live oversampling, and so on, but that's the basic idea. It is essentially a mipmap for audio.
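As a rough illustration of the downsampling step the answer describes, here is a minimal Swift sketch. It assumes interleaved stereo Float samples ([L, R, L, R, ...]); the function name buildVisualSamples and the default of 64 samples per pixel are hypothetical, chosen only to match the numbers in the example above.

```swift
import Foundation

/// Collapse interleaved stereo samples into one "visual sample" per
/// `samplesPerPixel` mono values. Hypothetical helper, for illustration only.
func buildVisualSamples(stereo: [Float], samplesPerPixel: Int = 64) -> [Float] {
    // 1) Average each L/R pair into a positive mono value.
    var mono = [Float]()
    mono.reserveCapacity(stereo.count / 2)
    var i = 0
    while i + 1 < stereo.count {
        mono.append(abs((stereo[i] + stereo[i + 1]) / 2))
        i += 2
    }

    // 2) Average each run of `samplesPerPixel` mono values into one float.
    var visual = [Float]()
    visual.reserveCapacity(mono.count / samplesPerPixel)
    var j = 0
    while j + samplesPerPixel <= mono.count {
        var sum: Float = 0
        for k in j..<(j + samplesPerPixel) { sum += mono[k] }
        visual.append(sum / Float(samplesPerPixel))
        j += samplesPerPixel
    }
    return visual
}

// 65536 stereo samples -> 32768 mono values -> 512 visual samples.
let stereo = (0..<65536).map { _ in Float.random(in: -1...1) }
let visual = buildVisualSamples(stereo: stereo)
print(visual.count) // 512
```

Building one such array per zoom level (each, say, at half the resolution of the previous) gives you the "mipmap for audio" mentioned above: when the user zooms or scrolls, you draw from the level closest to the current samples-per-pixel ratio instead of re-reading the raw recording from disk.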









