GPU Audio Brings GPU-Accelerated Audio Processing To Mac

GPU Audio has announced Apple Mac support for its GPU-accelerated audio processing platform:

“By far the biggest request we’ve had this year is for Apple support, and we’re pleased to announce that the day has arrived!

MacOS users now have access to the FIR Convolution Reverb, demonstrating one of the most processor-intensive plugin effects out there. The FIR Convolver is the perfect audio tool to try out GPU Audio’s unique processing technology – and best of all, it’s FREE!”

GPU Audio is a new technology company that says it has created the world’s first full GPU-based technology stack for audio processing.

They say that this release paves the way for the next generation of pro audio plugins on Apple M1 and M2 devices, with all of the benefits that brings, including Live Machine Learning and A.I. processing.

Features of GPU Audio:

  • Low-latency VST3 performance regardless of channel count
  • Real-time (instant) audio processing
  • Performance gains for AI and ML algorithmic use cases
  • DSP power that is orders of magnitude greater than a CPU’s

FIR Convolution Reverb is available now as a free download for Macs with an M1 or M2 processor.
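For context on why the company leads with a convolution reverb: FIR convolution applies a recorded impulse response to the input, and a direct implementation costs O(N·M) multiply-adds, which is why long reverb tails are so processor-hungry and why FFT-based (and GPU) implementations matter. A minimal plain-NumPy sketch of the two approaches — this is an illustration of the math, not GPU Audio’s implementation:

```python
import numpy as np

def fir_direct(signal, ir):
    """Direct FIR convolution: O(N*M) multiply-adds."""
    return np.convolve(signal, ir)

def fir_fft(signal, ir):
    """FFT-based convolution: O((N+M) log(N+M)) - far cheaper for long IRs."""
    n = len(signal) + len(ir) - 1
    size = 1 << (n - 1).bit_length()  # zero-pad to the next power of two
    out = np.fft.irfft(np.fft.rfft(signal, size) * np.fft.rfft(ir, size), size)
    return out[:n]

# A few seconds of reverb tail at 48 kHz is over 100k taps per output
# sample when done directly; here we use small arrays just to show the
# two methods agree.
rng = np.random.default_rng(0)
sig = rng.standard_normal(4800)
ir = rng.standard_normal(1000)
assert np.allclose(fir_direct(sig, ir), fir_fft(sig, ir))
```

Real convolution reverbs use partitioned (overlap-add) variants of the FFT approach so they can run with low latency, and that blocked, data-parallel structure is exactly what maps well to a GPU.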

53 thoughts on “GPU Audio Brings GPU-Accelerated Audio Processing To Mac”

  1. Can’t they ‘just’ make a VST or AU bridging host, so we can use all the top-name plugins we already have running off the GPU in our own DAW? …Please?

    1. That’s what I was thinking. I don’t want to use their plug-ins; I’d rather have Logic Pro, for example, and all the plug-ins running in it be GPU accelerated.

    2. It doesn’t work that way: the plugin needs to be developed for the GPU using their SDK. The internal algorithms have to be rewritten to take the GPU into account – there is no “plug & play” solution.

  2. I’ve not really been keeping an eye on this, but it seems interesting. I would have thought the endgame here was to make some kind of host/bridge so that users could harness the power of their GPU to run the DAWs, VSTs and AUs that they already own.

    Seems a bit wasted if it only works on their own plugins.

  3. I agree with the other posts – I would only be interested if it helps other plug-ins utilize the GPU. I have no interest in their plug-ins or DAW.

  4. @eoin As far as I know, the official story is that the company wants to be the one that offers a platform for other developers to create VSTs. This could of course be a way to create a walled-garden standard that later competes with the other standards. Of course Apple could choose to roll their own whenever it picks up popularity, and they have much bigger developer muscle and reach. It would of course be nice to see an open-source standard, so that we get a GPU-accelerated open standard instead. However, with open source lagging so far behind in the audio domain, so few people using open-source DAWs, and issues with studio equipment not having drivers for open-source platforms, such a thing seems far away.

    1. I find it a bit weird that they already have enough spare money around to sponsor sonictalk, when they’re not selling anything yet. Maybe they have VC investment, and maybe they hope that they’ll get bought by Apple’s Logic team.

  5. @sparkle
    GPU access over a bridge would run into a read/write-time bottleneck, which still means latency – and that defeats the point of having more processing power at lower latency. So a bridge simply wouldn’t be effective; the laws of physics say so. What would be effective is a thin abstraction library with bare-metal access deep at the operating-system level, which all VST developers could hook into.

    This domain is about getting as much direct access to the bare metal as possible, much like GPU-accelerated software for 3D graphics in gaming. If Apple offered a built-in GPU acceleration library for sound, as they do for graphics and AI, you would end up in Apple’s walled garden – unless, to gain momentum, they made the library open source.

    Another perhaps interesting question is whether the GPUs in their mobile offerings could also be used efficiently, not just for graphics and AI but for sound. If they can, that might be the strongest motive for Apple to take on such an effort, and from there it would probably spread into the rest of Apple’s operating-system ecosystem (so iOS → macOS).

    I am not a fanboy of any operating system. I have a development background, but also in the professional audio world. The reason I use Apple as my preferred tool for audio is not just the software I use, but also that it offers better support for multiple audio interfaces running on the OS. I could name more reasons, but that is a pretty basic one which, sadly, Microsoft hasn’t nailed yet. Linux and Unix are not great operating systems for typical studio usage, as they don’t support enough audio devices, although on other fronts they have improved. So if we were to expect built-in, low-level, low-latency support for GPU acceleration geared for audio, Apple is the one to do it. It is also much easier for them, as they only have to support a limited number of GPUs and, from an OS standpoint, fewer non-audio devices as well. They can adapt to things like this quicker as a result.
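The read/write-latency point earlier in this comment can be made concrete with a quick budget calculation. At typical audio buffer sizes the whole processing window is only a few milliseconds, so any fixed per-buffer transfer cost eats a large share of it. The transfer figure below is an illustrative assumption, not a measured number:

```python
SAMPLE_RATE = 48_000  # Hz

def buffer_ms(samples, rate=SAMPLE_RATE):
    """Time one audio buffer represents, in milliseconds."""
    return 1000 * samples / rate

# Hypothetical fixed round-trip cost for shipping a buffer to a
# discrete GPU and back (an assumption for illustration, not a benchmark).
TRANSFER_MS = 0.5

for n in (64, 128, 256):
    budget = buffer_ms(n)
    share = 100 * TRANSFER_MS / budget
    print(f"{n:>4} samples = {budget:.2f} ms budget; "
          f"{TRANSFER_MS} ms transfer eats {share:.0f}% of it")
```

The smaller the buffer (i.e. the lower the latency you want), the larger the fraction of the time budget that a fixed transfer cost consumes – which is the argument for integration at the OS/driver level rather than a bolt-on bridge.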

  6. For everyone saying they don’t think it’s fair that GPU Audio isn’t harnessing this tech for other DAWs or plugins: they’re going to make it open source at some point. They shared this in an interview online, which I cannot find, of course.

    1. I don’t think the general sentiment is that it’s “not fair”

      It’s more that it’s not very useful to most people if it can only be used on GPU Audio’s in-house plugins and DAW

      It’s their tech to do whatever they will with – but not allowing third-party VSTs to make use of it would be a big mistake imho

      It will only ever be a niche product without it

      1. Creamware, TC, SSL, Metric Halo, UAD – all did it with DSP (plugin ecosystems), and it was pretty successful, especially UAD.
        Deciding whether you will or won’t use their plugins without yet knowing how the final product turns out doesn’t make much sense to me.
        And anyway, if it is successful, others will copy it; it’s not like they are the only ones who can build audio plugins for GPUs – it has been done/tried before already.

    1. Most CPUs today are powerful. The M2 has very nice single-core performance, but it’s not that powerful compared to high-end Intel/AMD CPUs (which of course come at the cost of more noise).
      Try 4-5 channels of the Bitwig Grid and Drum Rack presets to see how your M2 behaves like a P4 🙂

      1. Don’t you think that tells us more about Bitwig (cough) and not much about the chip?

        I can do synthesis and convolution till the cows come home on an older A12 chip.

  7. There is no free lunch.
    Powerful GPUs very much tend to eat through electricity and need to dissipate lots of heat.
    Forget having a quiet music PC unless you have a water-cooled GPU, or only use a small fraction of the GPU’s capability.

      1. Please note the first word of my second sentence: “Powerful”.
        From Tom’s Hardware:
        “The M2 GPU is rated at just 3.6 teraflops. That’s less than half as fast as the RX 6600 and RTX 3050, and also lands below AMD’s much maligned RX 6500 XT (5.8 teraflops and 144 GB/s of bandwidth). It’s not the end of the world for gaming, but we don’t expect the M2 GPU to power through 1080p at maxed out settings and 60 fps.”

    1. “There is no free lunch”
      Yes there is – cost is an imaginary concept.
      Or call it doing things smarter: this is why a brand-new CPU can be 10x more efficient than a 10-year-old CPU, and the M1/M2 are super efficient (some of AMD’s CPUs are too).
      And GPUs actually tend to be more power-efficient than CPUs for many processing needs, so I guess we will wait and see?

      1. A decade of CPU development can in absolutely no way be viewed as a free lunch!
        I’d be more inclined to say that value is imaginary and that cost is much more tangible.
        Cost is much easier to put a figure on: it’s generally what you’re asked to pay in order to experience the value of your purchase (externalities are not included and artificially reduce the cost). Value is often highly inflated by dreams and advertising.

      2. This is not an imaginary concept. Fast, cheap, good… pick two. You can only ever have two. That’s true whether you’re making a CPU, a GPU, managing a project, or anything in life. Assuming some company can come along and just “innovate” their way into advanced performance is nothing short of ignorant.

        Applied to GPU and CPU manufacturing, this concept comes down to speed, heat, and energy consumption. If you have something like a Mac laptop, where you get long battery life and no heat, that means it’s speed-throttled. Period. It’s not more “efficient” in the sense that it’s faster as well. It’s not faster. Being more efficient means it’s speed-throttled in order to use less energy and generate less heat. If you want the most speed you can get, you will use energy and generate heat. Physics always wins. Do you know why you have so many cores in a new Mac? It’s not because they’re faster and more efficient. It’s because they’re speed-throttled, so you get more cores in order to do the same work you could get out of fewer cores that were not speed-throttled.

        1. It’s true that CPU throttling helps manage heat and battery life, but it’s not related to the efficiency of audio processing. Audio processing is not burst power; it’s sustained – well, mostly.
          All laptop CPUs and GPUs throttle. MacBooks have long battery life because ARM is more efficient than most x86 CPUs. The efficiency is the result of dedicated hardware for specific tasks, smaller transistors that generate less heat per cycle, shorter circuit paths in one SoC, a highly optimized OS for the specific hardware, and so on. It only has 8 CPU cores: 4 performance and 4 efficiency cores.
          The M2 Air lost about 20% of audio performance after throttling maxed out – that’s still roughly 5800/11900-class performance for audio, sustained, at 10 watts passively cooled, versus 100-200 W with x86.

          1. Please show us the tests.
            The benchmarks I’ve seen show M2, at best, using half the power of AMD’s mobile CPUs, not 10%. At times the difference is minimal. The AMD chips beat M2 on multi-threaded and AVX workloads (many audio plugins use these extensions).
            M2 comes last in the FL Studio benchmarks.

            While integration shortens paths and increases efficiency, I’m going to call ‘no free lunch’ once again on the showstopper: your SSD is soldered to your RAM and CPU.
            Perhaps I’m an old traditionalist, but I’ve had more than enough solid-state storage issues over the years to know that that just feels wrong.

            1. I compared the M2 to desktop CPUs (5800/11900) to demonstrate to Xtopherm that “throttling” is not the reason for the M1/M2’s efficiency. They have about the same performance with DAWs, but the M1/M2 use a fraction of the power.
              I know there are more efficient CPUs like the AMD R7 6800U – please read what I wrote:
              “the M1/M2 are super efficient (some of AMD’s CPUs are too)”

              The FL Studio test you linked is for “export”, so it’s not relevant to our plugin-processing discussion.

              I’m not an Apple user, and I’m not a fan of the way they do some things. I use a multi-touch display to control my DAW, and like you I may want to upgrade my main SSD at some point. But to be fair, you can add a fast SD card with the 14″-16″ Pro versions, and most Windows ultrabooks have the RAM soldered anyway.

              But I’m willing to admit I would love a Windows laptop with this kind of CPU performance: very efficient, very quiet, can be passively cooled, great battery life, and single-core performance comparable to high-end desktop CPUs (lower buffer/latency).

                1. OK, so you’re not able to show us the tests?
                  Multi-threading is a big part of both live and DAW performance.
                  The FL Studio test isn’t perfect, but it’s definitely very useful. The audio is rendered linearly, and it uses plugins and mixing. It’s a seriously better metric than 7-Zip, games, Premiere, etc.
                  I’m interested in underclocked 5800(X) low-latency performance.
                  You say you’ve tested it? You seem to speak with authority on the subject – do you have data?
                  I just plain do not believe an M2 can equal a 5800 like you claim.
                  For single-core they look comparable, but for multi-core?
                  BTW, isn’t the M2 a 22 W part?

                1. I used my own versions based on DAWbench and some of my biggest projects: polyphony and DSP.
                  The DDR5 of the M1/M2 gives some advantage with polyphony (or the number of VSTis), so it’s not just strictly CPU.

                  You’re taking things out of context. I didn’t say multi-core performance is not important; I said the M1/M2’s single-core performance is comparable to high-end desktop CPUs, so you may be able to use the same buffer size as with high-end desktop CPUs, or get better performance at a high buffer size. It depends on your system, DAW, RAM and your specific use case.

                  Low latency is mostly a factor of single-core performance, but multi-core performance helps with bigger projects at lower latency, so it’s directly connected to single-core performance. To improve or understand a specific system’s performance, it’s better to do your tests on your own system.
                  Again, my “claim” about the 5800X was to prove a specific point; I explained that to you already.

                  No – an export of audio from FL Studio will tell you nothing about CPU performance while tracking and mixing with plugins and instruments. The video you linked treats DAWs like video rendering.
                  Even experienced users who focus on audio production make many mistakes when benchmarking, and it gets harder with cross-platform tests. Even Nick Batt made a mistake testing the M1: he didn’t test it to the max.
                  So at the least, try a DAWbench-based test.

                    1. I stand wholly unconvinced.
                      You had an opportunity to explain why FL Studio is a useless metric, but you didn’t give any evidence. I said it wasn’t perfect, and you claim it’s completely irrelevant.
                      For it to behave like a video renderer, it would need to break the render into chunks. A lot of work goes into creating each frame of a video render, which will occupy something like 1/24th of a second of the final render; it makes sense to duplicate plugins, split up the job, then stitch it back together. That behaviour is entirely at odds with how an audio stream is rendered: there are no frames that can be split out and run as separate jobs.
                      I don’t use FL Studio, I use Reaper, but I think it’s a safe assumption that rendering is handled in a similar fashion. Offline full-speed renders are created linearly. It is the same process as pressing play and monitoring the whole mix, except that if it can create the stream faster than realtime, it does.
                      What the FL Studio (or Reaper) render test does is take away the overrun analysis. At times the render could run close to realtime, or even under it, when there is heavy DSP use. Reaper provides an overview of its running speed at render; perhaps FL Studio’s render does too.
                      That realtime or close-to-realtime rendering speed directly relates to the project’s low-latency playback. What is missing is OS overheads and how they relate to a given time slice (determined by sample buffer and sample frequency).
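The claim above that an audio stream cannot be split into independent jobs the way video frames can comes down to state: a stateful effect carries its internal state from one buffer into the next. A tiny sketch with a hypothetical one-pole lowpass (illustrative, not any DAW’s actual code) shows why chunks rendered independently give a different result:

```python
def one_pole_lowpass(buffers, a=0.1):
    """Render buffers in order; each buffer depends on the filter
    state left behind by the previous one."""
    state = 0.0
    out = []
    for buf in buffers:
        rendered = []
        for x in buf:
            state += a * (x - state)  # y[n] depends on y[n-1]
            rendered.append(state)
        out.append(rendered)
    return out

# Splitting the stream and rendering the chunks as independent jobs
# gives a different (wrong) result, because each chunk restarts from
# state 0 instead of continuing from the previous buffer's state.
stream = [[1.0, 1.0], [1.0, 1.0]]
sequential = one_pole_lowpass(stream)
independent = [one_pole_lowpass([b])[0] for b in stream]
assert sequential != independent
```

This is exactly the sequential dependency that makes offline audio rendering linear: buffer N cannot start until the state after buffer N-1 is known.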

                    1. This is on you: you need to prove that an “export test” gives results that help in understanding “real DAW work” performance, comparable to a known and tried test that all DAW builders have used for 20 years now – just to defend a “test” from someone who clearly knows nothing about DAW performance testing. It’s kind of embarrassing, actually.
                      As with any scientific test, you need to make it as close as possible to the real scenario; many things can go wrong even when it’s done right.
                      It seems you’re making assumptions to make it right – that’s called bias, and you also need to exclude that from your tests.

                    2. Dear Gadi,
                      It’s funny that you think my detailing my assumptions and methodology is unscientific.
                      It’s also funny that you choose not to engage with those but use ad hominem (‘kinda embarrassing actually’) instead.
                      And an appeal to authority (“tried test all daw builder use for 20 years now”). Dawbench is far from perfect as you well know.
                      I wouldn’t call it ‘defending’ when I explore why a render test is useful. That kind of language is incredibly unscientific and counter-productive to arriving at useful conclusions.
                      ‘from someone who clearly knows nothing ‘ this may or may not be true, but what is absolutely true is that you cannot disregard the test simply because the person carrying it out is not broadly qualified in your opinion. All that is important is that the person performed the test correctly.

                    3. “It’s funny that you think my detailing my assumptions and methodology is unscientific”

                      I didn’t say that. I said that, as with any scientific test, you need to make it as close as possible to the real scenario; there is no reason not to. Even if, in some absurd way, an export test were comparable to real use, what would you gain from it? It would take much longer to conduct, it would not give you any result you could use to assess the number of plugins, instruments, channels… so many things can go wrong because of the different ways audio is processed in real time versus in export mode, and it would not correct any fault of DAWbench. The only thing I can think you could gain is not having to say you are wrong.

                      “And an appeal to authority (“tried test all daw builder use for 20 years now”)”

                      No, I didn’t say it is true because all DAW builders use it; I mentioned it so you understand that what you’re saying contradicts the way it has been done to this day by everyone. So this is not authority bias – it is the standard.

                      “Dawbench is far from perfect as you well know”

                      It’s still the best way to test a DAW system for real-time performance, along with any other method based on the same principle. For my personal use I use my own project and multiply the channels until the audio breaks. It is, by definition, a perfect method for me, because this is exactly what I do with the system.
                      Also, whatever fault you find with DAWbench will just be duplicated (at best) in an export test.

                      “It’s also funny that you choose not to engage with those but use ad hominem (‘kinda embarrassing actually’) instead”
                      “I wouldn’t call it ‘defending’ when I explore why a render test is useful. That kind of language is incredibly unscientific and counter-productive to arriving at useful conclusions”

                      Let’s make this clear: I’m not conducting an “exploration” with you. Considering you’ve changed the focus of your replies multiple times trying to find fault in mine (M2 efficiency, 5800X performance, this export test and so on), don’t be surprised that I’m not looking to add more information for you to have fun with. It seems to me you’re mostly looking to argue. I simply have nothing to gain from this kind of discussion, so please save your words if you’re going to change the subject again by finding fault in my side notes.

                      “‘from someone who clearly knows nothing ‘ this may or may not be true, but what is absolutely true is that you cannot disregard the test simply because the person carrying it out is not broadly qualified in your opinion”

                      I disregard this person’s ability to test DAW performance because of this test, not the other way around. It is embarrassing to test CPU performance in DAWs with an export test.

                      “All that is important is that the person performed the test correctly”

                      He didn’t, by the known scientific way to do it. If someone uses an export test to measure the CPU performance of DAWs, it shows he knows nothing about testing DAW performance. Simple as that. I really don’t understand how you can argue with that; again, it seems to me the only reason you try to make it right is so you will be right.

                    4. Gadi,
                      Please dispense with the ad hominem.
                      I assure you that I argue in good faith.

                      If you make a statement in a post I am allowed to question it.
                      You claim the FL Studio export is a terrible benchmark yet you have provided absolutely zero evidence.
                      I outlined why I thought it was reasonable and some reasons to take care with it.
                      You indeed made an appeal to authority.
                      Using your own projects and stress testing is a fine test. But useless for comparing online. Which is why dawbench exists.
                      It is already heavily compromised before we even start using it.
                      I certainly do not work in a way that lends itself to dawbench. Their first gen compressor test was so removed from the real world that I questioned the competence of the designer. It took seemingly years to add actual synths to the test.
                      Processing chains and signal routing play a big role in how well a project runs on multicore cpus. It’d likely be a dull project without plenty of that.

                      “You claim the FL Studio export is a terrible benchmark yet you have provided absolutely zero evidence”

                      I gave you many reasons why it is a bad test, with no benefits compared to DAWbench.
                      It’s you who made the statement; it’s your flying spaghetti monster, so the burden of proof is yours.

                      You again try to find fault (“appeal to authority”, “ad hominem”) in what I wrote, because you simply can’t admit you were wrong to post that link with its embarrassing test.

  8. Following this closely. I use Ableton Live + Presonus Quantum to perform fast MIDI keytar solos and live vocal processing. A whole low-latency DAW would be amazing for taking my production into a live context as fast and easy as possible.

    1. If you can already work at a 128-sample buffer then you’ve probably got the capability you need.
      64 would be nicer, sure. You can monitor your voice in the Quantum’s hardware mixer at ‘zero’ latency. Relying on a GPU that is not designed for the job, and adding more points of failure, is a recipe for disaster (not a fast, smooth and easy workflow).
      A 3rd-gen i7 probably has enough power to do what you need, or a 4th-gen if you need AVX2.
      (I’ve been using PCs live for several years for synths and vocal processing.)

  9. Higher-tier Intel Arc could be an interesting choice for next year, if they don’t charge Nvidia prices… that is, if they support Microsoft Windows at least.

  10. I’m a little concerned about using this with an SoC, because I don’t want it to interfere with other graphics tasks. It’s one of those things that (like using the CPU as opposed to DSP cards) makes a system less stable because of the variability of use. Of course we all use lots of CPU processing, but on extremely large projects, or the large templates composers use, things start to reach the limits of shared computing power. If you are working with video and lots of open windows with metering and so on, this whole thing becomes extremely reliant on how a system allocates resources, and that part remains unproven to me. And I would never change DAWs only to get a bit more DSP – who would, except someone who’s not invested in the one they use? So maybe this product will have to crawl its way up via users who are just starting out, though they won’t be the ones to really push the tech and get it stable enough for pro use.

    CPU-independent DSP is useful, but to me it’s most useful in converters for near-real-time processing as in the Antelope interfaces. Once the path is in the computer one is subject to the usual latency concerns.

      1. Thanks for soothing my little nerves. It’s not unreasonable to posit that introducing a new demand in a system that wasn’t designed for it might be a bit bumpy. And because things need to work at peak or they don’t work in a pro situation…

        1. Computers don’t work on common sense (yet), and you didn’t mention a real issue, no matter how “reasonable” you think your assumption is. There will be many obstacles for “GPU audio” along the way, but it’s better to focus on the real ones instead of conceptual ones.

  11. I understand the basics involved, but it seems like a classic solution-looking-for-a-problem situation. You might consider it if you were doing triple the track count of a Pink Floyd album or running Jarre’s entire mega-rig live, but under the M1, Logic is now near-instantaneous in use. How many tracks of exactly WHAT are you doing that would need GPU power? Orchestral work, maybe, but not the glorified-band format most of us use. I have a very rare 42 tracks going in one place and not a hiccup.

    I’m well empowered with several reverbs already, including a convolution model I like, so as with the ATMOS debate, I’ll be waiting for another couple of steps in the proof of concept lane. My gain in speed and ease of use is already excellent under the M1.

    1. Not everyone does what you do, as you said. I’m fairly skeptical of the utility of this on a mission-critical system, despite what any really very bright expert kids may say – but if it were solid and if it could be segregated from other video processes – like digital picture and animating plugins and fast screen drawing – it might be useful. Lots of convolution reverbs are a drag on any system, but using lots of them independently on lots of different stem busses with a great number of instruments is always an issue. Always workarounds, but it’s good not to have that when you are on a deadline. I’m with you on Atmos – it’s just another way to release things, a solution in search of a problem.
