How to use Core Audio in Swift

I am starting to use Swift for my new audio app and I need to use the Apple Core Audio library.

To be honest, Swift is a great language and I love it, but as soon as you have to deal with C APIs, pointers, and manual memory management, it becomes really awkward to use.

To get this straight in my head, I would like to hear your thoughts on how best to interact with Core Audio from Swift.

I thought about using C++ (for the convenience of std::vector and the like) or plain C, but both require a bridging header.

So my questions are:

  • Do you use pure Swift or a C/C++ bridge when working with Core Audio?
  • Which one is faster?


1 answer


I think I found the answers, so I'll leave them here in case anyone is interested.

  • Bridging is the preferred approach. As invalidname (Chris Adamson) put it in his talk on media frameworks, render unto Caesar the things that are Caesar's: use C for the C APIs and Swift for the Swift things.
  • As for efficiency, I found an article that discusses this. The conclusion is that for primitive types, the overhead of converting the type, calling the C function, and converting the result back is negligible. For some types, however, such as String/char *, structs, and other complex types, the conversions can degrade performance.
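
To illustrate the first point, here is a minimal sketch of calling Core Audio's C API directly from Swift, with no bridging header needed, since Swift imports Apple's C frameworks natively. This assumes macOS (on iOS you would use `kAudioUnitSubType_RemoteIO` instead of the default output unit):

```swift
import AudioToolbox

// Describe the component we want: the default hardware output unit.
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_DefaultOutput,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0
)

// AudioComponentFindNext is a plain C function; Swift calls it directly,
// passing the struct by reference with the & operator.
guard let component = AudioComponentFindNext(nil, &desc) else {
    fatalError("Default output unit not found")
}

var unit: AudioUnit?
// Core Audio's C API reports errors through OSStatus return codes
// rather than throwing, so you check the result yourself.
let status = AudioComponentInstanceNew(component, &unit)
assert(status == noErr, "Failed to instantiate audio unit: \(status)")
```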
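
And to illustrate the second point about conversion cost, here is a small sketch contrasting a primitive type (which maps directly onto its C counterpart) with a String (which must be copied into a temporary null-terminated buffer every time it crosses into C). `strlen` here merely stands in for any C API taking a `char *` parameter:

```swift
import Foundation

// Primitive types map directly onto their C counterparts, so calling a
// C function with an Int32 involves no bridging overhead.
let n: Int32 = -42
let magnitude = abs(n)

// A Swift String, by contrast, is copied into a temporary
// null-terminated UTF-8 buffer each time it is passed to C.
let message = "hello, core audio"
let length = message.withCString { cString -> Int in
    strlen(cString)
}
```

If you call into C in a tight loop (for example, per audio buffer), it pays to keep the data in C-friendly types up front rather than converting on every call.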


By the way, feel free to add more information if you think it might help other people.
