And when it comes to predicting the future of electronic music technology, it’s downright foolhardy.
We’re not going to let that keep us from peering into the crystal ball, though, and sharing our thoughts on the future of electronic music making.
10 Predictions For Electronic Music Making In The Next Decade
Here are 10 predictions for what’s ahead – not just in 2011, but for the next ten years:
- Modular synthesis goes mainstream – the price of modular synthesizers has plummeted in the last few years. A Moog modular used to cost as much as a house. Now you can get a sophisticated modular synth for a few thousand dollars. More and more synths, like the Minimoog Voyager XL, are adding semi-modular features, too. Multiple options are available for modular software synthesis and there are even modular synths for mobile devices. Now that modular synths are becoming affordable, they need to become easier to learn. Just as synths have sprouted knobs and become easier to use in the last decade, modular synthesis will become more visual and tactile in the next decade.
- Music robots join the band – 10 years ago, the idea of jamming with robots was science fiction. Since then, there’s been all kinds of innovation in the area of music robots. We’ve reported on robots that play the theremin; creepy pop singer robots; robots that improvise jazz; robot orchestras; and gamelan robots. And earlier this year, Pat Metheny replaced his band with robots. This technology is going to go mainstream in the next decade. Instead of playing with sequences, you’ll be remixing the performances of music robots playing “live”.
- Handheld music making becomes the norm – hate all those articles about iPad music software? We're really, really sorry – but handheld devices are going to be more powerful than your current desktop in three or four years, and they will become the norm for making electronic music. Don't forget how skeptical people were about laptop music making 10 years ago. The iPad is going to get some serious challengers, though, which will mean lots of cheap, powerful hardware to choose from. But handheld music devices won't just get more powerful – they'll be more aware of their surroundings: communicating wirelessly with other devices, pinpointing their location via GPS, viewing the world through multiple cameras and sensing how you hold and move them. And when you take your handheld device home, it will become the brain of your home studio.
- You’ll design your own instruments – in the last few years, synth “hot rodding” has grown in popularity. You can get Roland TB-303s with mods, keyboards with custom paint jobs and LEDs, and end panels in the exotic wood of your choice. This is going to go mainstream in the next decade, with gear manufacturers offering the option to order your gear completely customized. Advances in manufacturing technology are going to push this further, though. In a decade, you’ll design your own instruments, test them out virtually and have them “printed” to your specifications.
- Cloud-based music-making will get real – cloud-based music-making, the idea that you’ll make music in a virtual studio hosted somewhere online, will take off. Current cloud-based virtual studios are about where ReBirth was ten years ago. Project this forward a decade, though, and you’ll be making music in a 3D virtual studio. You’ll rearrange your virtual studio to meet the way you work. Your home studio will be made up of real instruments, flexible controllers and giant touchscreens. You’ll be able to take your studio with you, though, because you’ll be able to access your tracks and racks of virtual gear from wireless devices and any computer that you can log on to.
- You’ll see musicians getting electronic music body modifications – the next decade will bring body modifications that let you turn your flesh into a musical instrument. No flute jokes – we are talking about tattoos that can be used as music controllers; embedded sensors that sense your body movements and transmit them wirelessly; and electronics that are so tiny that they can be fused with your body. You’ll control synths with your mind and your body will become a synthesizer.
- Everything will be a musical instrument – gestural synthesis and tangible music controller technologies are going to move from being interesting toys to being powerful tools. You’ve seen the Reactable and you’ve seen tangible sequencers and what musicians are doing with the Kinect. In the next decade, your computer will use its camera and microphone to “understand” what you’re doing with your hands and body. Your computer will see you do things like draw a keyboard and then let you use that picture as a controller to play a virtual instrument. Take a chessboard and use it as a grid sequencer. Wave your hands around in the air and you’ll be playing a virtual theremin. Dig a bit deeper and you’ll use the motion of dancers as modulators for synthesizing music. Put a camera on the club floor and the density of dancers can be a modulator for controlling your DJ software. Everything will become an instrument that you can make part of your rig.
- Music software will get smarter – the state of the art in digital audio workstations is amazing. But, by and large, DAW manufacturers are still making virtual versions of traditional hardware studios. Most soft synths still look and act like their hardware predecessors, and that’s what buyers are demanding. At this point, imitating traditional studios is horseless carriage thinking – letting what we can imagine be defined by the past. In the next decade, music software is going to get smarter and interfaces will make bolder leaps. You’ll tell your computer that you want to make a drum and bass track, and your DAW will anticipate how you’ll want your virtual studio configured. Ready to get started? Say “gimme a beat!” You’ll interact with your DAW to “evolve” new sounds. You’ll hum the bassline and your DAW will notate it. You’ll build the track by saying that you want a 32-measure intro, a drop down to the bass, and the kick brought back in after 16 measures. You’ll draw a curve on a timeline to define the shape of your track, do a run-through and improvise over the rhythm track. Then you’ll tell your DAW to add a middle eight, double the bassline and master it with more “zazz” – and it will be saved to the cloud for your fans to listen to.
- You’ll have to rethink everything you know about the music industry – the major labels are losing their role as gatekeepers, and chaos is taking their place. The album is increasingly obsolete. The Internet means you can connect with fans on the other side of the world, in real time. YouTube has replaced MTV. Your ability to find fans, get gigs and get paid for making music is going to depend on your ability to rethink everything you know about the music industry in the context of today’s connected world. If you’re making niche electronic music, you may need to find your audience in another city or another country. Or you may find that you have a thousand dedicated fans diffused around the globe, but struggle to get a dozen to show up at a local gig. If you want to have an audience, understanding how you can collaborate with musicians and visual artists around the world, and how you can publish your work to the Web, will become essential.
- Music will become intelligent – the idea of selling fixed music will become archaic in the next decade. Why should everybody who hears your music hear the exact same thing, forever – regardless of their sound system, their location, what they’re doing and who they are? Music is going to become more malleable. It will adapt to its surroundings and to what the listener is doing. We’re already starting to see this with “reactive music” – like the free Inception music app, which adapts to the listener’s location, environment and actions. Apps like BT’s Sonifi let the listener take an active role, interactively remixing singles with movements and gestures. The new decade will bring one-to-one music making. Instead of making one track for everyone, you’ll create a unique musical experience for each person who listens to your music. You’ll make music that incorporates feedback from the listener, reacts to their motion, and adapts itself to blend seamlessly with the previous song they listened to. You’ll make music that listens to its environment and adjusts its density to create a customized soundtrack to listeners’ lives. Music will have intelligent decision-making embedded in it, giving it a life of its own.
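If the “everything is an instrument” prediction sounds far-fetched, consider how little code the dancers-as-modulator idea actually needs. This is a toy sketch, not a real implementation: frames are plain lists of brightness values, and the change threshold and the mapping to a MIDI-style control value are our own assumptions – a real build would pull frames from a camera library and send the result out over MIDI.

```python
# Toy sketch: estimate motion density between two grayscale frames
# and map it to a MIDI-style continuous-controller value (0-127).
# Frames are flat lists of pixel brightness values (0-255).

def motion_density(prev_frame, frame):
    """Fraction of pixels that changed noticeably between frames."""
    changed = sum(1 for a, b in zip(prev_frame, frame) if abs(a - b) > 16)
    return changed / len(frame)

def to_cc(density):
    """Clamp a 0.0-1.0 density into the 0-127 MIDI controller range."""
    return max(0, min(127, round(density * 127)))

# Half the "pixels" change between these two tiny frames,
# so the controller value lands mid-range.
prev = [0, 0, 0, 0]
curr = [0, 0, 255, 255]
print(to_cc(motion_density(prev, curr)))  # -> 64
```

Feed that controller value to a filter cutoff in your DJ software and the crowd is, quite literally, playing the mix.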
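The data side of the “tell your DAW what you want” prediction is also simpler than it sounds: high-level directions like “a 32-measure intro, then a drop to the bass” are just an ordered list of sections. The section names and the command format here are invented for illustration – the hard part, understanding speech and humming, is left to the next decade’s software.

```python
# Toy sketch: turn high-level (section name, length in bars)
# directions into an arrangement timeline of (start bar, name).

def build_arrangement(sections):
    """Lay sections end to end, recording where each one starts."""
    timeline, start = [], 0
    for name, bars in sections:
        timeline.append((start, name))
        start += bars
    return timeline

print(build_arrangement([("intro", 32), ("bass drop", 16), ("kick returns", 16)]))
# -> [(0, 'intro'), (32, 'bass drop'), (48, 'kick returns')]
```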
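And reactive, density-adapting music of the kind described in the last prediction can be approximated today by muting and unmuting pre-composed stems. In this sketch the layer names and activity thresholds are invented for illustration – they don’t come from Inception or Sonifi – but the principle is the same: the more active the listener, the denser the mix.

```python
# Toy sketch of reactive music: each pre-composed stem has an
# activity threshold, and only stems at or below the listener's
# current activity level are allowed to sound.

LAYERS = [
    ("pad",   0.0),   # always on
    ("bass",  0.25),
    ("drums", 0.5),
    ("lead",  0.75),  # only at high activity
]

def active_layers(activity):
    """Return the stems that should sound at a 0.0-1.0 activity level."""
    return [name for name, threshold in LAYERS if activity >= threshold]

print(active_layers(0.6))  # -> ['pad', 'bass', 'drums']
```

Swap “activity” for walking pace, time of day or crowd density and you have a track that never plays the same way twice.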
You’ve got our predictions. Now, leave a comment and let us know what you think of them.
And if you can peer into that crystal ball yourself, let us know what your predictions are for making electronic music in 2011 and the next decade!