New Company, GPU Audio, Wants To Turn Your Graphic Card Into A Powerful Audio DSP

GPU Audio is a new technology company that says it has created the world’s first full GPU-based technology stack for audio processing.

The company wants to let you harness the parallel processing power of the graphics processing units (GPUs) found in modern laptop and desktop computers and put it to work for audio DSP (digital signal processing).

“Our mission is to make GPU Audio the next standard of audio processing, so that music and audio production can stand up to the demands of 21st-century content,” said co-founder Alexander Talashov. “GPU Audio holds the key to fast, easy, and unlimited power needed to allow audio producers and adjacent industries to participate fully in the future of content, production workflows, audio tools, software engines, and more.”

Features of GPU Audio:

  • Low-latency VST3 performance regardless of channel count
  • Real-time (instant) audio processing
  • Performance gains for AI and ML algorithmic use cases
  • DSP power that is orders of magnitude greater than a CPU’s

The GPU Audio ecosystem is composed of both consumer- and business-facing models of engagement.

For the audio producer, they plan to offer a proprietary suite of VST3 plugins covering a full range of standard music production tools, including spatial audio tools developed in collaboration with Mach1.tech.

For the developer, they will offer a fully built, modular SDK for creating custom DSP products and implementations.

As an initial step, GPU Audio has released an Early Access plugin, alongside its keynote at NVIDIA’s GTC conference in March 2022, inaugurating standardized GPU-powered VST3s for the music and audio production community.

The Early Access community is focused on benchmarking and bug-squashing in preparation for the beta-suite release in early summer 2022. The plugin demonstrates a proof of concept of GPU Audio on one of the most demanding algorithms: FIR convolution reverb. By offloading DSP onto a computer’s local or remote GPU, it dramatically increases performance, allowing real-time parallel audio processing across hundreds of channels and VST3 instances without added latency.
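
To make the workload concrete: an FIR convolution reverb multiplies the input stream against every tap of an impulse response that can run to hundreds of thousands of samples, which is exactly the kind of embarrassingly parallel math GPUs excel at. Here is a rough NumPy sketch of the underlying operation – a stand-in for the math only, not GPU Audio’s implementation:

```python
# FFT-based (fast) FIR convolution: the core math behind a convolution
# reverb. On a GPU, the FFTs and the per-bin multiply parallelize
# across thousands of cores; here NumPy does it serially on the CPU.
import numpy as np

def fir_convolve(signal, impulse_response):
    """Convolve a dry signal with a reverb impulse response via the FFT."""
    n = len(signal) + len(impulse_response) - 1   # full convolution length
    nfft = 1 << (n - 1).bit_length()              # next power of two
    spectrum = np.fft.rfft(signal, nfft) * np.fft.rfft(impulse_response, nfft)
    return np.fft.irfft(spectrum, nfft)[:n]       # trim the zero padding
```

Real-time engines additionally split the impulse response into blocks (partitioned convolution) so the first partitions can be rendered with little or no latency, which is presumably what a zero-added-latency GPU implementation has to do as well.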

You can check out the Early Access Plugin at the GPU Audio site.

47 thoughts on “New Company, GPU Audio, Wants To Turn Your Graphic Card Into A Powerful Audio DSP”

  1. This is a fascinating development. First, the newer Macs have lots of GPU cores that may be under-exploited by audio producers. But even for other machines, the idea of making use of the GPU for processor-intensive tasks seems very clever and timely. Convolution is a marvelous test case, as long reverb IRs require power. The ability to make use of that power and have it be low-latency is also quite intriguing. Granted, the newer Macs might not need much help in this regard, but it would raise the ceiling, so a user might not even have to think about how hard they are pushing. Buffers can be lowered, plugin counts can go through the roof, and fans might come on less often.

    1. Yeah, the M1s are the first thing I thought about when I saw this. They are super optimized for graphics since they are built for video editing. Would be curious to see how this affects them.

  2. Heck yeah! Brilliant. My ROG Strix Hero III is already pretty powerful, but this would be a game changer for my machine.

    1. Welcome, Jonathan!!

      I know quite a few Mac users who are excited about this idea and are happy to see a plan for Mac development!

      I’m so glad you are working on a convolution for this. That’s brilliant!

        1. Thanks a lot! Yes, and by the way, using an eGPU or connecting to your gaming PC via a network will work much earlier than native M1 support. So hopefully people on Macs will have access to this power sooner rather than later 🙂

        1. M1 Macs don’t support eGPUs, and I don’t think laptop producers want their projects to rely on a noisy, heavy gaming PC tower 🙂

          It’s really awesome that you’re working on this – it’s about time someone did!

          But please make Metal support a top commercial priority: it would be gimmicky to release something that’s NVidia-only or even DirectX-only given the Mac’s dominance in the audio world, and also awful timing just as M1 Macs are cheap, powerful and selling like hot cakes (even the $599 iPad has an M1 now: if I’m going to offload any processing to something, it will be a much easier sell to me than a spare PC).

          1. No – it’s not “gimmicky” just because it’s not your personal preference.

            But it’s not surprising to see that Apple people are no less entitled than usual.

  3. I wish the focus here were not only on Macs but on any GPU. Those who have NVIDIA 20xx or 30xx cards would be really glad to use their truly powerful GPUs as well.

    1. Hey! Don’t be mistaken – some comments are referencing Macs, but our solution here is actually founded on NVIDIA GPUs and will soon enable use with AMD GPUs. 🙂

    2. We’ve got plenty of cool stuff in the works 🙂 But we’d also love for future partner companies (aka any companies who want to take advantage of GPU power) to build their own products as well – wouldn’t it be great to get an insane GPU-powered Omnisphere synth? We can dream! Hit us up for collabs/biz!

  4. The developers of this tech need to contact the devs of Nebula ASAP. This might be the perfect solution for getting Volterra kernel-based processing to reach its maximum potential. Nebula plugins are very CPU-intensive but sound awesome.

  5. I can’t wait to depend on expensive video cards like 3D designers and gamers.
    Especially in this shortage period…

    1. Then keep relying on your current CPU-based plugins. Nobody is making you switch. Next you will be complaining about VEP and needing a network and other PCs (when that is a feature, not an issue).

      Personally, I welcome anything that will help breathe life into older production rigs.

    2. I love these comments because it gives me the opportunity to remind people that GPUs aren’t just local devices….cloud networks of these are literally running the world and that will increase exponentially in the near future. For the futurists in the room…think about those implications ;)))

      1. Even UAD is evolving with SPARK; closed DSP technology is more than obsolete.
        If you need more power, there is also something free like AudioGridder (which is basically a scalable CPU-based DSP server).
        I’ve been following the DSP thing since Creamware/Capybara/Pro Tools/Focusrite/UAD/Acustica Audio/etc. (owned some till the end of the 2000s), and with all the power we have today from relatively easy and cheap CPUs, going back to DSP is quite insane.
        And nowadays it’s not even a question of latency anymore…

  6. I downloaded their convolver, and in the instructions there was a mention that the graphics card needs to be dedicated to the audio processing, and that other software (including Windows) would have to use the CPU’s internal GPU. It seems a bit difficult to achieve. Or maybe I misunderstood.

    1. All it means is that, to achieve optimal results, you’ll want the integrated GPU on your CPU (like Intel’s embedded GPU) to handle your screen while we offload DSP to your discrete GPU. It’s very common to have both in your computer – virtually every “gaming” or production laptop has it set up this way as an option.

  7. A VST/AU wrapper that employed their GPU DSP expertise to host the EXISTING plugins we have would be of greater benefit, I suggest.
    I am not going to buy a whole new suite of plugins to replace FabFilter, Eventide, SugarBytes, etc.
    If Ableton, Bitwig, etc. licensed the code and built it natively into their DAW, even better.

    1. Would be cool, but the existing VST3 implementation on CPU has many limits and is really designed for CPU architecture – it’s not really feasible to do so. The latter, however – rebuilding a DAW to utilize GPUs as an option – is totally feasible, and the company is open to licensing and/or co-ventures or collaborations.

    2. Hey Joseph! Our new homepage will be dropping very soon and will have signups for those interested specifically in the SDK, but just FYI, it is coming and we are working super hard on it!

  8. Except they really aren’t. The GPUs are surprisingly gutless; the Apple marketing material is misleading, to say the least. The CPUs are pretty quick, though. The performance per watt is very nice, but the overall performance is “meh”. For example, an M1 Pro is roughly GeForce 1650 performance, which is far from impressive.

    I’m in the middle of a fairly big editing project on an M1 Mac in DaVinci Resolve (which has an ARM build), and while it’s OK for the size of machine, anything which really needs GPU power, like the Fusion page, is pretty disappointing on it – I tend to do that on a desktop PC.

    There’s a bit of an “ooh yeah fastezt comput0r” echo chamber around the M1 series right now, fed by misleading marketing and people really wanting it to be true, and it just is not. It’s incredibly respectable performance per watt, but honestly, that’s it.

    1. Your comment is nearly 100% nonsense.

      Anandtech, a very well respected tech site, says “The first Apple-built GPU for a Mac is significantly faster than any integrated GPU we’ve been able to get our hands on. ”

      https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested/3

      That’s Apple’s bottom of the line computer.

      The one point that you make that sort of makes sense is that a PC with a dedicated hardware GPU is going to have better graphic performance than a PC with an integrated GPU. If you’re doing paid work that maxes out your GPU all day, you’re going to be better off tricking out your computer with the best GPU you can buy, whether it’s a PC or Mac. But you need something like the GeForce RTX 3090, a $2000 hardware GPU, to really smoke the graphic performance of the M1 Ultra’s integrated graphics.

      And Apple hasn’t updated the Mac Pro lineup yet, so you can’t get an M1 Mac tricked out with the best GPU you can buy at this time.

      This obviously bodes well for the future of Apple’s lineup, though, because their current lineup is full of screaming fast computers.

      The bottom line is that GPU Audio’s tech sounds pretty cool and should be an awesome option if they get it ported over to M1 Macs.

    2. Good points man. I’m of the mind that tools need to be diversified and interconnected for the future. We believe GPUs can be a viable standard for this in many respects (for instance, in the GPU cloud-processing backend we are building for DSP)

  9. There have been so many tries in the past, and people have always been hopeful. Somehow none of them has turned out to work. Don’t have high hopes… And computers made more than 12 years ago were already quite powerful for audio.

  10. Cool idea. Fingers crossed for you guys. Getting “sample accurate” processes to sync across multicore CPUs is already hard. Keeping them in sync across CPU(s) and GPU(s) will not be a trivial task. Good luck!
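
The sync concern above can be illustrated with a toy model: suppose an offloaded effect returns each block one block late (as a GPU or networked device might), and the host compensates by the reported latency, the way DAWs do plugin-delay compensation. Everything here – class names, block size, the one-block delay – is hypothetical, a sketch of the idea rather than GPU Audio’s actual scheduler.

```python
# Toy model of keeping an offloaded (GPU/remote) effect sample-accurate.
# The device returns each block one block late, so the host pads the
# input by the reported latency and trims the same amount from the
# front, realigning the output sample-for-sample with the input.

BLOCK = 4  # samples per processing block (hypothetical, tiny for clarity)

class OffloadedGain:
    """Pretend 'GPU' effect: a gain whose output arrives one block late."""
    def __init__(self, gain):
        self.gain = gain
        self.pending = [0.0] * BLOCK  # the block still "in flight"
        self.latency = BLOCK          # reported so the host can compensate

    def process(self, block):
        # Return the previously submitted block; queue the new one.
        done, self.pending = self.pending, [x * self.gain for x in block]
        return done

def render(track, fx):
    """Host side: run the track through fx with delay compensation."""
    padded = track + [0.0] * fx.latency
    out = []
    for i in range(0, len(padded), BLOCK):
        out += fx.process(padded[i:i + BLOCK])
    return out[fx.latency:]  # discard the initial silent blocks
```

A real engine has to do this across many devices and whole plugin graphs at once, with jitter on top, so the commenter’s caution is fair.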

  11. To everyone here – from skeptics to enthusiasts of GPU power – please accept this invitation to join us for a 50-minute talk + Q&A about how we did this. We will go over how we made it happen, how it differs from any other solution (why it is novel), and how we parallelize audio effectively on the GPU. Helpful graphics included, and we’ll be there as long as the Q&A goes! Free registration for NVIDIA GTC.

    For developers, the tech really opens possibilities not only for powerful VST3s without added latency, but for more efficient and real-time ML/AI solutions for your future software as well. Drop in!

    https://www.nvidia.com/gtc/session-catalog/?search=audio#/session/1638568619487001je4s

  13. Is there any open-source component to this? Specifically thinking about running this on my Linux render server. I don’t mind paying, but I do mind having to install Windows!

    1. Hey, thanks! We’ll eventually get there. At this precise moment we’re focused on direct partnerships. When the SDK drops in a couple of months (hopefully with the Beta Suite), we can share more info about how we can make it available! As far as Linux goes, I’m definitely pushing for this. Let’s see what we can accomplish in the short and long term.

      1. Excellent – I just found your reply when clearing out old emails. It reminded me to check out the project again! Still hoping for Linux support though 🙂

  14. When will we see multi-core support for regular CPUs?
    Most DAWs and other audio software I use rarely use more than 4 cores on a 20-core machine.
    Wouldn’t that be a good place to start?

  15. Unfortunately, that’s an issue for the makers of those DAWs, and to be totally honest, I think there is a complex relationship between the underutilization of those cores, the goals/trajectories of those companies to standardize their offerings, and what is happening in the CPU industry. That being said, we do believe that GPUs offer a more open, easily scalable style of processing if implemented (which is what we’ve done), along with far more headroom for next-gen software. We love CPUs too, but of course we focus on GPUs. Please check the GTC talk if you’d like to watch the replay of today’s address!

    https://www.nvidia.com/gtc/session-catalog/?search=audio#/session/1638568619487001je4s

Leave a Reply

Your email address will not be published. Required fields are marked *