Amazon today introduced AWS DeepComposer, described as “the world’s first machine learning-enabled musical keyboard for developers.”
While machine learning/artificial intelligence has already been used in the development of electronic instruments like the Mutable Instruments Grids drum sequencer and Roland’s new Jupiter-X keyboards, DeepComposer is designed to open the door to broader application of machine learning to music composition and performance.
While DeepComposer looks like a standard MIDI controller, it’s part of a hybrid desktop + cloud platform designed to jumpstart development of artificial intelligence-based music making. AWS DeepComposer includes tutorials, sample code, and training data that can be used to get started building generative models, without having to write any code.
Here’s what they have to say about it:
Generative AI is one of the biggest recent advancements in artificial intelligence technology because of its ability to create something new. It opens the door to an entire world of possibilities for human and computer creativity, with practical applications emerging across industries, from turning sketches into images for accelerated product development, to improving computer-aided design of complex objects.
Until now, developers interested in growing skills in this area haven’t had an easy way to get started. Developers, regardless of their background in ML or music, can get started with Generative Adversarial Networks (GANs). This Generative AI technique pits two different neural networks against each other to produce new and original digital works based on sample inputs. With AWS DeepComposer, you can train and optimize GAN models to create original music.
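To unpack the generator-vs-discriminator idea Amazon describes above, here’s a rough, illustrative PyTorch sketch of a GAN training loop on toy data. It’s a minimal sketch of the general technique, not Amazon’s DeepComposer code or API; the network sizes, the 16-step ‘melody’, and the sine-wave stand-in for real training data are all made up for illustration.

```python
# Minimal GAN training loop (illustrative only -- not the DeepComposer implementation).
# A "generator" learns to produce fake 16-step sequences that a "discriminator"
# can no longer tell apart from the real training examples.
import torch
import torch.nn as nn

SEQ_LEN, NOISE_DIM, BATCH = 16, 8, 32

# Generator: random noise in, fake sequence out.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, SEQ_LEN), nn.Tanh(),
)

# Discriminator: sequence in, single "real vs. fake" score out.
discriminator = nn.Sequential(
    nn.Linear(SEQ_LEN, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Toy "real" data: identical sine-shaped contours standing in for melodies.
real_batch = torch.sin(torch.linspace(0, 6.28, SEQ_LEN)).repeat(BATCH, 1)

for step in range(2000):
    # 1. Train the discriminator to label real data 1 and generated data 0.
    fake_batch = generator(torch.randn(BATCH, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(BATCH, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to make the discriminator label its output as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(BATCH, NOISE_DIM))),
                     torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, sample a new "composition" from noise alone.
print(generator(torch.randn(1, NOISE_DIM)))
```

The console’s pre-trained genre models apply this same adversarial idea to real musical training data rather than a toy sine wave.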
Here’s the official intro, featuring Amazon Web Services VP of AI Dr. Matt Wood and musician Jonathan Coulton:
AWS DeepComposer Features:
AWS DeepComposer gives developers of all skill levels a creative way to experience machine learning:
- Keyboard – Input a melody by connecting the AWS DeepComposer keyboard to your computer. Use the hardware buttons on the keyboard to control the volume, playback, and recording flow, as well as the built-in functions to create more complex inputs.
- Console – Generate an original musical composition in seconds, using the pre-trained genre models in the AWS DeepComposer console. Choose from rock, pop, jazz, classical, or build your own custom genre.
- Publish – Share your creations by publishing your tracks to SoundCloud from the AWS DeepComposer console.
- Build Generative AI models – You can build your own custom model and music with Amazon SageMaker.
Pricing and Availability
AWS DeepComposer is coming soon for $99 USD. Details are available at the Amazon site.
Amazon continues its push to own every market segment. The video should reassure anyone who was worrying about their creativity being eclipsed by the cloud, though; it’s going to be a while before they catch up with arranger keyboards, never mind anything even slightly experimental.
I don’t consider machine-generated sound to be music.
Also, I don’t mind mentioning that I do NOT want to see Amazon in everything.
Impressively out-of-leftfield WTF news! Thanks.
Yeah, the quality just isn’t there yet. Feeling some relief after watching the video and listening to the generated music. It still sets off my suck alarm; it’s still missing the sensibility. Sounds like a new musician who overplays constantly.
For 20 years I’ve been hearing about artificial intelligence, automatic composing, etc., and the result is still the same.
Just compare it with this
https://www.synthtopia.com/content/2019/12/01/one-take-analog-synth-jam-with-jamiroquais-matt-johnson/
and sit back and smile (or laugh)….
I’ve been interested in generative and algorithmic music for 20+ years now and the pattern I keep seeing is programmers trying to use it as a shortcut to learning what sounds good and developing musical skill of their own. For every genuine experimentalist like Aphex Twin or Autechre or (add your favorite) there are hundreds of coders with no real musical taste trying to make an automatic music system they can pimp out for easy money.
Sometimes you get something that works, as in the recent video game ‘Untitled Goose Game’, where the soundtrack is provided by a generative algorithm tweaked to resemble the feel of Debussy solo piano music. Besides the amusing contrast with the on-screen silliness, generative music works great in this context because an open-ended game needs music to set a mood rather than lock the player into performance against the clock – so it’s OK not to have distinct structure or direction.
The problem with the current style of machine learning (which revolves around a technique called Generative Adversarial Networks) is that it’s very good at doing certain kinds of imitation, but only when there are a fair number of fixed underlying coefficients. So it’s great for things like video face substitution, because virtually everybody has 2 eyes, a nose, a mouth and so on, and only the proportions change. But while music does have some fixed points like popular instrumentation or stereotypical drum patterns, they vary widely between genres and are too arbitrary for the way such networks are trained, which might be why so much AI music sounds good for about 15-30 seconds but then fails to go anywhere.
Anyone working in this space would do well to give up on trying to make a great composer in the computer, and instead pursue the more modest goal of an interactive session player that’s responsive to another musician’s taste: start small, try to make a drum machine with the personality of a Tamagotchi.
Sounds more like an accompaniment box on an organ. Straight played to death. No feel. Even KARMA kills this AWS dog.
That’s it lads, “the singularity” has hit the music world (or nearly…).
I’d prefer Band-in-a-Box, and oh yeah, they should add some Auto-Tune for singers as well.
Honestly, I’d even prefer my 30-year-old Casio keyboard for an auto accompaniment. I feel like AWS probably had a team of people working on it but then just cut the project short before it was ready. You can hear the crowd still love it and plenty of people will still buy this keyboard.
This deep learning hype should annoy people more than Behringer does.
No. This guy is more likeable.
My feeling is that 98% of modern (boring) pop music is already done by this thing (and then sent through the same mastering plugin and EQ)….
You hit the nail on the head.
Sure, it’s supposed to sound rough. It isn’t FOR actual music composers. It’s a dev kit without the benefit of owning the rights to any model you might develop. Amazon will then take their model, refined at the expense of your hard labor, and license it out to backfill crappy YouTube videos and ‘ambient’ Alexa channels. They take money at both ends and everyone loses.
If the example is anything like what it’s going to sound like, musicians are fine. This thing blows just like 90% of synthpop out there.
Deep breath….
Ok. First of all, there isn’t anything special about that “AWS” keyboard – and there shouldn’t need to be. It’s just an input device. Why would that input process need anything more than a bog-standard MIDI controller? But the more important question is, why not make it so that anyone with a MIDI … anything… could just use that as their input device?!
As for the technology itself, it is promising. Imagine how much more interesting that demo would have been if they had included/permitted more complex harmonic and rhythmic structures. Down the road, if the sound generator isn’t constrained by equal temperament and has totally free tuning (or even breaks free from “tuning” altogether), we could actually hear something new and wonderful, as opposed to something a kid could make with GarageBand in a couple of hours.
With this demo, they were clearly aiming at a pretty basic level of musical experience. It may be that they can push into new territory, but it is hard to say whether this system has hard limits that keep it at this general level of depth, or whether it can reach that “space age design” level.
What’s really interesting is that with all of this supposed computing power studying “patterns”… it still can’t figure out how drum patterns work. A lot of the drum programming just seems random with a crash cymbal here, there and everywhere; snare drums wherever; and no regard for how something as simple as a hi-hat tends to work.
Chalk that up to the fact that many people who own drum machines have never been around real drums (or bands) and don’t know what’s actually sensible. With a drum machine you can play with more than 4 limbs, often with crummy results.
Most likely, most of the readers here will not have heard “A Fifth of Beethoven.” Yes, I listened to it being played on a jukebox.
That’s what this AI application is like. It isn’t composing from what I would view as inspiration. It isn’t going to do something on its own like Vivaldi’s “The Four Seasons.” This is more like an extended version of Google’s AI composer.
Walter Murphy would be spinning in his grave
All your notes are belong to us!
Seriously, a 32-note keyboard?
I love the lumping of musical genres… everything is just one of four “styles!” Anyway, I think the results of Mozart’s musical dice game sound better…
https://en.wikipedia.org/wiki/Musikalisches_Würfelspiel
But what’s the fucking use of this? No longer paying musicians to compose bad advertisement jingles????
You know, I kinda want to try this, mainly because I want to just punch random keys all over the place and see what it does.
The egg is hatching! I have mixed feelings about this, BUT it is quite interesting. The fact that they “offer” user input (a user-made database of “references”) is pretty interesting; you could make a few tracks in your own style/taste and have the A.I. generate something inside those lines. This is only the beginning (that generic-sounding MIDI backing track, LOL), but it will eventually become more refined, better and more accurate? I guess. If the project isn’t abandoned.
I’m sure they can do better. Still a long way to go to credibly simulate a professional human composer or musician. Sounds rather like a bunch of teenagers.
I’ve heard similar or better GarageBand® work from kids in the 6-to-12-year-old range.
In fairness, I have heard some AI examples that weren’t as laughable, but they may have been “massaged” a bit. Perhaps as the technology becomes more refined, AI and machine learning can produce captivating and worthwhile music– what then? It’ll be fun to listen to, might be useful for all sorts of music scoring needs. But what then? The implications are “uncomfortable”.
A composer might produce a masterpiece, and people might say, “Well, did (s)he cheat?” Technology continually seems to take the focus off the question of who deserves the credit, and instead puts it on the content itself, which might raise the question, “Why do we like this?”
Our musical tastes are more malleable than we might admit. We can adapt to new rhythms, new scales, new chords, etc. So rather than having a machine simply guess at what we already want to hear (yesterday), it COULD bring us something truly new that we ourselves have to grow into. Composers already do this. It is our job (in a way) to challenge listeners; it’s part of our work to bring new flavors of dissonance, new contours of tension and release.
There’s no such thing as cheating in music. If it sounds good, it is good, and how you got there is a wanker’s game. Stop trying to make everything a competition. It’s just art.
Agreed, but honestly, most ‘spot’ music used in commercials, etc. sounds about as generic as the generated stuff here (and this is a 1.0). And if your goal isn’t to be a musician but to create a video with some music, this might be plenty. Or, if you’re a little interested/motivated in the musical part, you could pull the output of something like this into GarageBand to tweak it a bit.
Consider the flip side: musicians with no desire to become pro videographers have had access to super-easy video creation tools, and that’s led to a proliferation of original music on YouTube. Some are raw captures, some are pulled into something like iMovie or whatever to spruce them up a bit. Sure, the angles and lighting may kinda suck, but the video ain’t the point.
Everyone forgets about Prosoniq’s neural network in the Hartmann Neuron.
If the dev kit really is accessible for creating your own models, this could be very interesting. Imagine you and a few compadres training the models with whatever inputs you like, feeding it very, very simple melodies or chords and then pointing the output at something more interesting than the General MIDI sounds used in the demo.
Or training the “discriminator” model with the entirety of the Aphex Twin catalog but then using a waltzy, country-music-trained “generator” model.
Maybe if the “Discriminator” is replaced by a perfect copy of Frank Zappa’s mind this could get useful. For now it makes my ears hurt…
Interested, but why would you need a dedicated keyboard for this? It would be preferable to offer an interface that allows any MIDI controller to be used (including MPE, PolyAT, Expressive E Osmose, etc.).
$99 is cheap … you get what you pay for, I guess. Not sure Hive Mind music is the way to go. What happens to the work uploaded to SoundCloud if (when?) SoundCloud goes under?