This video, via Noisebug, demonstrates a custom Moog format 5U algorithmic analog drum machine.
The drum machine features several modules from Corsynth, including their DR-01 Bass Drum, DR-02 Snare Drum and DR-03 Hi-Hat / Metal. Each module features a complete analog voice, tailored to a different type of drum synthesis.
The drum machine is driven by the FSFX 110 Topographic Drum Sequencer, a 5U adaptation of the Mutable Instruments Grids synthesizer module. Grids is an algorithmic drum sequencer, based on AI-powered machine learning trained on the drum patterns of electronic music and other genres. So, instead of building a pattern, you can select a starting point and use CV or controls to modulate the density of each part of the pattern.
The system also includes a Corsynth C111 Multimode Contour Generator for envelopes and a Moon 526 Reversible Mixer, housed in a Moon M500-T10 case.
The Moog Format Analog Drum System is available via Noisebug for $2,250. All the components are also available individually for creating custom analog drum synthesis solutions.
18 thoughts on “Analog Drum Machine In 5U Powered By Artificial Intelligence”
What’s going on with the title? I’m quite literally not seeing a mention of “Artificial Intelligence” anywhere else.
“Grids is an algorithmic drum sequencer, based on AI-powered machine learning trained on the drum patterns of electronic music and other genres.”
It’s not really artificial intelligence. They’re just trying to get you to click the link.
Machine learning is a category of AI. Grids’ drum pattern topography is generated using machine learning.
Topography as in “collective features of a region” ?
Care to elaborate?
Mutable used AI (machine learning trained on a big collection of drum patterns from electronic music) to generate a topographic map of rhythmic starting points.
The module itself can interpolate smoothly between patterns, and then for any point on the map, it has a ‘z-dimension’ for each of the voices that controls the pattern density.
So you can think of it like a 3D map of rhythmic patterns, where x and y control your place on the map and z controls how deep or complex the rhythm goes.
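As a rough illustrative sketch of that x/y/z idea (an assumption-laden toy, not Mutable’s actual implementation): store a grid of per-step trigger intensities, bilinearly interpolate between the four surrounding map nodes using x and y, and let a per-voice density value (the z-dimension) decide how low an intensity still fires. The node data, step count, and thresholding rule here are all made up for the example.

```python
import random

STEPS = 8  # pattern length for this sketch (a real sequencer would use more steps)

random.seed(42)  # deterministic demo data for the hypothetical map

def make_node():
    """A map node: per-step trigger intensities in [0, 1] for one voice."""
    return [random.random() for _ in range(STEPS)]

# Four corner patterns of one cell of the hypothetical map.
corners = {(0, 0): make_node(), (1, 0): make_node(),
           (0, 1): make_node(), (1, 1): make_node()}

def interpolate(x, y):
    """Bilinear blend of the four corners; x and y are the map position in [0, 1]."""
    return [corners[(0, 0)][i] * (1 - x) * (1 - y)
            + corners[(1, 0)][i] * x * (1 - y)
            + corners[(0, 1)][i] * (1 - x) * y
            + corners[(1, 1)][i] * x * y
            for i in range(STEPS)]

def triggers(x, y, density):
    """The z-dimension: raising density lets lower-intensity steps fire too."""
    return [intensity > 1.0 - density for intensity in interpolate(x, y)]
```

Sweeping x/y morphs smoothly between neighboring patterns, while turning up density fills in more hits per voice, which matches the “deeper or more complex” behavior described above.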
For me, the module is most interesting when you play with the controls a lot or even sequence the controls, because you get complex and even realistic drum patterns.
This Noisebug system contains the precomputed result of a machine-learning analysis, used here as a constrained randomizer; there is no artificial intelligence running in the hardware itself. Fitting actual AI into so few synthesizer modules would be difficult, if possible at all, and ridiculously expensive, far beyond what is presented here.
So, really a fancy name for a constrained randomizer?
Like, once upon a time, while making my own drum VST (a Pure Data project), I came across a gigantic database of (sadly mainstream) drum patterns. It didn’t include, for example, the vast amount of free jazz performances out there.
Thus I’m curious about the database they use.
Not really impressive… it all sounds nearly the same…
I think this demo gives a small view of the possibilities…
To me a good AI would learn from the user’s habits over time and would then self-patch somehow.
To me a good AI would wash my dishes, clean my laundry and then craft some dope beats… somehow.
Yours is inferior, my good AI would also wipe my…
There’s a palpable and consistent undercurrent of sadness in the synthtopia comment section
Or an undercurrent of having fun
But OK, let’s take your original statement
“over time”: what amount of time and when would “then” happen?
Or maybe you’d have to store a certain number of patches first, then run the model(s)? Would you really be satisfied with that approach? Or maybe you’d end up going with the same approach they used and end up with either a similar or a more varied result. After all, your rhythm patches won’t be radically different from other people’s, especially within Western music.
If it were to self-patch, those patch points would of course be digital. But in this case it was only about rhythm generation, so I have no idea what you mean by self-patch.
Also where would the models run? On the module? In the cloud? On your PC/Mac?
And before I forget, would you have more fun that way?
Anyway, “AI” is a buzzword here more than anything else. You could argue things like.. How to make an AI useful.. Let’s take this use case: You have some drum loops.. your basic arrangement, you’re too lazy to program in the variety needed for it.. that could be a job for an “AI”. It’s still boring, though. Except for those developing the AI maybe, and I’m not convinced of that either.
I don’t want to put words in Joseph’s mouth, but it sounds like what they want is a machine learning algorithm fed with samples of their “style” that could then generate things that they like, maybe something tweakable, a bit of a personal assistant you teach with your data.
Maybe you’d have a few banks of data, one being your own and the others drawn from areas of music you’d like mingling with your own, and you could have the algorithm generate patches ranging from purely derivative of you to blends with bank B, with varying degrees of deviation.
That would be pretty neat. Probably doable to some extent.
Horrible demo. I’m sure there are some moments to be had here for $2250, but I’ve heard more interesting sounds from many iPad apps. And that horrible, cheesy “snare” just gives me a headache. Sorry to those who only want positive reviews.
Any part of AI research that gets well understood enough to leave the lab and get put to use is almost by definition not really AI anymore.
“Constrained Randomizer” sounds like an EDM band. My old punk band was called “Incontinence Hotline.” Our one mini-hit was “Hold, Please.”
I’m not a target buyer here. For $2,250, I’ll be glad to stick to my DAW, hand-drumming, arpeggiators and loopers.
I do like some of what I’ve heard from DFAM and others. I’m just one of those who decided to reduce his hardware stash to software. Now I only have computer problems and 20 backups, not 100 cords to troubleshoot.