AudioKit is a new open source platform for audio synthesis, processing and analysis on iOS and OS X.
It has evolved from Csound, the computer music language. As CDM’s Peter Kirn notes, “what AudioKit is in effect is Csound as an audio engine, with Objective-C and Swift as the API.”
- 100+ Synthesizers and FX – Physical Models, Spectral Effects, Granular Synthesis, Effects Processing, Filters, Reverbs, and more.
- Built-in Sampler – Record audio streams, including from the microphone, into tracks you can name, recall, and process on-the-fly.
- Powerful Sequencing – Sequences are not limited to the usual notes-on-a-score, but can contain blocks of any code that can be triggered at any time.
- Full-featured Examples – The list of examples is growing, but already contains projects demonstrating audio techniques such as FM Synthesis, Granular Synthesis, Convolution, Effects Processing, Pitch-Shifting, and more.
- Simple, Human-readable Code – Code is written with audio metaphors: Conductors control Orchestras, which contain Instruments that produce Notes. Clear methods with Apple-style naming conventions, Xcode completion, documentation, and tool-tips.
- Write your audio code alongside your app logic – The same code that controls your data and user interface controls your sound, in Objective-C or Swift.
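To give a feel for the Orchestra/Instrument/Note metaphor described above, here is a minimal Swift sketch. The class and method names (`AKInstrument`, `AKOscillator`, `AKAudioOutput`, `AKOrchestra`, `playForDuration`) are assumptions based on AudioKit's naming style, not a verified API; consult the project documentation for the actual interfaces.

```swift
import AudioKit  // assumes the AudioKit framework is linked into the project

// Hypothetical sketch: an instrument is a graph of signal-processing
// operations, here a single sine oscillator routed to the audio output.
class ToneGenerator: AKInstrument {
    override init() {
        super.init()
        let oscillator = AKOscillator()          // assumed oscillator operation
        connect(oscillator)
        connect(AKAudioOutput(audioSource: oscillator))  // assumed output stage
    }
}

// The orchestra hosts instruments; notes (or simple play calls) trigger them.
let tone = ToneGenerator()
AKOrchestra.addInstrument(tone)
AKOrchestra.start()
tone.playForDuration(1.0)  // assumed convenience method: play for one second
```

The appeal of this design is that the same Swift file can hold both UI logic and synthesis code, as the last bullet notes.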
Here’s a video intro:
Details on AudioKit are available at the project site.
If any readers are using AudioKit, let us know how you are using it and what you think of it!
via CDM’s Peter Kirn