What framework should I use to play audio files (WAV, MP3, AIFF) on iOS with low latency?

iOS has a range of audio frameworks, from high-level ones that simply play a specified file, down to low-level ones that hand you raw PCM data, and everything in between. For our application we only need to play external files (WAV, AIFF, MP3), but we need to do it in response to a button press, and we want that latency to be as low as possible. (This is for cues in live productions.)

Now, AVAudioPlayer and the like work for playing simple file resources (via their URL), but their latency when starting playback is too high. With large files over five minutes long, the delay before the audio actually triggers can be over a second, which makes them all but useless for real-time synchronization.

I know that something like OpenAL can be used for very low-latency playback, but then you're up to your waist in sound buffers, sound sources, listeners, and so on.

So, does anyone know of a framework that works at a higher level (i.e. play "MyBeddingTrack.mp3") with very low latency? Pre-buffering is fine; it's just the trigger that needs to be fast.

Bonus points if we can do things like set start and end points within the file, change the volume, or even duck other audio.

+3




7 Answers


The following SO question contains working code that plays a file using Audio Units, specifically the AudioFilePlayer unit. Even though the question says it doesn't work, it worked out of the box for me; I only had to add AUGraphStart(_graph) at the end.

The ScheduledFilePrime property of the AudioFilePlayer specifies how much of the file is loaded before playback starts. You can play with this.
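A condensed sketch of that approach (untested here, error handling omitted; the file name comes from the question, everything else is the standard AudioFilePlayer recipe):

    #import <AudioToolbox/AudioToolbox.h>

    AUGraph _graph;
    AudioUnit _filePlayerUnit;
    AudioFileID _audioFile;

    // Build a graph: AudioFilePlayer generator -> RemoteIO output.
    NewAUGraph(&_graph);

    AudioComponentDescription playerDesc = { .componentType = kAudioUnitType_Generator,
        .componentSubType = kAudioUnitSubType_AudioFilePlayer,
        .componentManufacturer = kAudioUnitManufacturer_Apple };
    AudioComponentDescription outDesc = { .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple };

    AUNode playerNode, outNode;
    AUGraphAddNode(_graph, &playerDesc, &playerNode);
    AUGraphAddNode(_graph, &outDesc, &outNode);
    AUGraphConnectNodeInput(_graph, playerNode, 0, outNode, 0);
    AUGraphOpen(_graph);
    AUGraphNodeInfo(_graph, playerNode, NULL, &_filePlayerUnit);
    AUGraphInitialize(_graph);

    // Open the file and hand it to the player unit.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"MyBeddingTrack" withExtension:@"mp3"];
    AudioFileOpenURL((__bridge CFURLRef)url, kAudioFileReadPermission, 0, &_audioFile);
    AudioUnitSetProperty(_filePlayerUnit, kAudioUnitProperty_ScheduledFileIDs,
                         kAudioUnitScope_Global, 0, &_audioFile, sizeof(_audioFile));

    // Schedule the whole file as one region.
    UInt64 packetCount; UInt32 size = sizeof(packetCount);
    AudioFileGetProperty(_audioFile, kAudioFilePropertyAudioDataPacketCount, &size, &packetCount);
    AudioStreamBasicDescription fmt; size = sizeof(fmt);
    AudioFileGetProperty(_audioFile, kAudioFilePropertyDataFormat, &size, &fmt);

    ScheduledAudioFileRegion region = {0};
    region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    region.mAudioFile = _audioFile;
    region.mFramesToPlay = (UInt32)(packetCount * fmt.mFramesPerPacket);
    AudioUnitSetProperty(_filePlayerUnit, kAudioUnitProperty_ScheduledFileRegion,
                         kAudioUnitScope_Global, 0, &region, sizeof(region));

    // ScheduledFilePrime: how many frames to pre-buffer (0 = default).
    UInt32 primeFrames = 0;
    AudioUnitSetProperty(_filePlayerUnit, kAudioUnitProperty_ScheduledFilePrime,
                         kAudioUnitScope_Global, 0, &primeFrames, sizeof(primeFrames));

    // Start "as soon as possible" (-1), then start the graph.
    AudioTimeStamp startTime = {0};
    startTime.mFlags = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = -1;
    AudioUnitSetProperty(_filePlayerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                         kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));
    AUGraphStart(_graph);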



But as others have pointed out, Audio Units have a steep learning curve.

+1




The lowest latency you can get is with Audio Units, specifically RemoteIO.

Remote I/O unit

The Remote I/O unit (subtype kAudioUnitSubType_RemoteIO) connects to device hardware for input, output, or simultaneous input and output. Use it for playback, recording, or low-latency simultaneous input and output where echo cancellation is not required.



Take a look at the following tutorials:

http://atastypixel.com/blog/using-remoteio-audio-unit/

http://atastypixel.com/blog/playing-audio-in-time-using-remote-io/
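For reference, the skeleton those tutorials build looks roughly like this (a sketch, not drop-in code; you would also configure the stream format and the AVAudioSession buffer duration):

    #import <AudioToolbox/AudioToolbox.h>
    #include <string.h>

    // The render callback runs on the real-time audio thread:
    // no locks, no allocation, no Objective-C messaging in here.
    static OSStatus RenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData) {
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
            // Copy inNumberFrames of pre-decoded PCM from your own buffer here;
            // silence is written as a placeholder.
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
        return noErr;
    }

    AudioComponentDescription desc = { .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple };

    AudioUnit ioUnit;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &ioUnit);

    AURenderCallbackStruct cb = { RenderCallback, NULL /* your state */ };
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0 /* output bus */, &cb, sizeof(cb));

    AudioUnitInitialize(ioUnit);
    AudioOutputUnitStart(ioUnit);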

+2




While the Audio Queue framework is relatively easy to use, it does a lot of DSP work behind the scenes (e.g. if you feed it VBR/compressed audio, it automatically converts it to PCM before playing it through the speaker), and it also handles a lot of threading issues opaquely to the end user. That's good news when you're writing a small non-real-time application.
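For context, the buffer-callback model it automates looks roughly like this (a hypothetical skeleton: fileFormat and myReader stand in for your file's AudioStreamBasicDescription and whatever state your read function needs):

    #import <AudioToolbox/AudioToolbox.h>

    // The queue calls this whenever a buffer has been played and can be refilled.
    static void OutputCallback(void *userData, AudioQueueRef queue, AudioQueueBufferRef buffer) {
        // Read the next packets from the file into buffer->mAudioData,
        // set buffer->mAudioDataByteSize (and packet descriptions for VBR),
        // then hand the buffer back. Decompression to PCM happens inside the queue.
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }

    AudioQueueRef queue;
    AudioQueueNewOutput(&fileFormat, OutputCallback, myReader,
                        NULL, NULL, 0, &queue);   // NULL run loop = queue's own thread

    for (int i = 0; i < 3; i++) {                 // triple-buffer, then prime
        AudioQueueBufferRef buf;
        AudioQueueAllocateBuffer(queue, 64 * 1024, &buf);
        OutputCallback(myReader, queue, buf);
    }
    AudioQueueStart(queue, NULL);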

You mentioned that you need this for cues in live productions. I'm not sure whether that means your application is effectively real-time, because if it is, then Audio Queue will struggle to meet your needs. A good article to read about this is Ross Bencina's. The takeaway is that you can't let third-party frameworks or libraries do anything potentially expensive behind the scenes, like locking threads or allocating and freeing memory; that's too expensive and risky for real-time application development.

That's where Audio Units come in. Audio Queue is actually built on top of the Audio Unit framework (it automates much of it). But Audio Units take you as close to the metal as iOS allows. They're as responsive as you need, and they make a real-time app feasible. Audio Units do have a huge learning curve, though. There are open source wrappers around them that simplify things (see Novocaine).

If I were you, I would at least skim through Learning Core Audio; it's the go-to book for any iOS Core Audio developer. It details Audio Queues, Audio Units, etc., and has great code examples.

From my own experience: I was working on a live audio application with some intense audio requirements. I found the Audio Queue framework and thought it was too good to be true. My app worked when I prototyped it with light constraints, but it simply choked under stress testing; that's when I had to dive deep into Audio Units, change the architecture, and so on (it was ugly). My advice: work with Audio Queue at least as an introduction to Audio Units; stick with it if it suits your needs, but don't be afraid to move to Audio Units if it becomes clear that Audio Queue no longer meets your application's requirements.

+2




You need System Sound Services. It is designed for things like interface sounds or short, responsive sound effects. Take a look here.
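A minimal sketch ("Cue.wav" is a placeholder name); note that System Sound Services is limited to short sounds of roughly 30 seconds and offers no volume control, so it suits short cues rather than full tracks:

    #import <AudioToolbox/AudioToolbox.h>

    // At load time: register the sound once.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"Cue" withExtension:@"wav"];
    SystemSoundID soundID;
    AudioServicesCreateSystemSoundID((__bridge CFURLRef)url, &soundID);

    // Later, on the button press; returns immediately with very low latency.
    AudioServicesPlaySystemSound(soundID);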

+1




AVAudioPlayer has a prepareToPlay method to preload its audio buffers. This can significantly speed up response times.
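A minimal sketch: create the player when the cue is loaded, prime it, and only call play on the trigger (the file name is the one from the question):

    #import <AVFoundation/AVFoundation.h>

    NSURL *url = [[NSBundle mainBundle] URLForResource:@"MyBeddingTrack" withExtension:@"mp3"];
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
    [player prepareToPlay];   // preload buffers ahead of time

    // ... later, on the button press:
    [player play];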

+1




I faced the same problem as you, but after some research I found a great framework. I am currently using kstenerud's ObjectAL sound framework. It is based on OpenAL and is well documented. You can play background music and sound effects with multiple layers.

Here is the GitHub project: https://github.com/kstenerud/ObjectAL-for-iPhone
And here is the website: http://kstenerud.github.com/ObjectAL-for-iPhone/index.html
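Usage is roughly like this, going by the project's documentation ("Cue.wav" is a placeholder; check the current headers for the exact method names); preloading the effect caches the OpenAL buffer so the trigger itself is fast:

    #import "OALSimpleAudio.h"

    // At load time: cache the effect so playback starts without disk I/O.
    [[OALSimpleAudio sharedInstance] preloadEffect:@"Cue.wav"];

    // On the button press:
    [[OALSimpleAudio sharedInstance] playEffect:@"Cue.wav"];

    // Background music is streamed separately:
    [[OALSimpleAudio sharedInstance] playBg:@"MyBeddingTrack.mp3" loop:YES];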

+1










