Ableton, Native Instruments & Others Join MMA To Help Shape The Future Of MIDI

The MIDI Manufacturers Association (MMA) has announced that Ableton, Aodyo, Audio Modeling, Art+Logic, Jammy Guitar, Melodics, MIND Music Labs, Native Instruments, OnSong, and TouchKeys have joined the MMA to collaborate with other hardware and software developers on extending the power of MIDI technology.

The MMA says that the new members have joined to help shape a ‘major update of MIDI’:

The planned update to the MIDI specification will support new levels of musical expression and make electronic instruments easier to configure and use.

Standardized in 1983, MIDI 1.0 has adapted over the years to support all operating systems and communication protocols, but its core specifications have stayed the same. This initiative updates MIDI with in-demand options: auto-configuration, new DAW/Web integrations, extended resolution, increased expressiveness, and tighter timing — all while maintaining a high priority on backward compatibility.

This major update of MIDI will support the development of a new generation of interconnected devices and preserve the relevance of existing MIDI 1.0 devices.

“Our goal is to assemble a diverse group of companies of different sizes and business models and help them come to consensus on how to make their products interoperable using MIDI technology,” explained Tom White, President of the MMA.

No timetable has been announced for the major MIDI update.

58 thoughts on “Ableton, Native Instruments & Others Join MMA To Help Shape The Future Of MIDI”

  1. Aarrrghh!! In the name of humanity no more announcements about a consortium of companies “collaborating to discuss new possibilities” or “joining the MMA to extend the capabilities” or “reaching a milestone”!! We’ve been hearing about collaborating and extending and milestones for decades!

    But wait… this time they’re serious, right?

    1. Yes they are. There is a lot of work happening in the background, but it takes time – lots of time – to get it right (hopefully) and usable… and, most importantly, to get everyone on board. You don’t want another standard that 3 companies implement and everyone else ignores 🙂

        1. I guess I’m referring to things in the past. I’m currently working on specs, and it’s important to me that what I put forward is something that would get implemented. That means lots and lots of feedback and demos and talking.

          1. Interesting that you’d think of historical cases. We could learn a lot from those.

            One could argue that MPE almost fits that description (though more than three manufacturers have implemented it). For all intents and purposes, the main corporations named in this piece have ignored it. ROLI’s Noise does have soundpacks from Audio Modeling which can take advantage of MPE controllers, but only in monophonic mode.
            This announcement specifically includes a comment about expressiveness. From the point of view of MPE implementers, these other companies are an obstacle to the adoption of an official standard which does focus on expressiveness.

            Arguably, MPE is more of a stopgap/transition than anything else. It’s a bit of a kludge based on the affordances of the 1983 standard. It’s quite possible that some of these manufacturers perceive MPE as a dead end and want to put all their efforts behind a full reimplementation. That would be a “political” stance. We may argue on the technical merits of competing standards but a lot of adoption is blocked by politicking. In the past, people have mentioned Japanese manufacturers as most likely to veto standards change based on their interests. The manufacturers mentioned in this piece are among the ones which erect barriers to MPE adoption. (They’re also among the ones which could do a lot more to adopt MIDI-BLE and improve over it. Even Yamaha has Bluetooth dongles! Come on, gang! Go with the flow!)

            The situation reminds me of HTML5. It was a de facto standard before it became an official one. It was originally implemented by a few companies, mostly browser developers. Those who eschewed it may have done so for technical reasons. Yet adoption increased as political barriers were lifted. Now, it’s even supported by Adobe!

            If this announcement had been about a clear roadmap to adopt MPE first and then move to an improved standard (through MIDI-CI?), that would have been a real gamechanger. Imagine the Sylphyo, Maschine, and Push controllers sending polyphonic expression in different dimensions that the SWAM Engine could leverage. Then, think about the possibilities of devices negotiating with one another to optimize the pairing of controls with functions. Afterwards, manufacturers could implement really cool stuff without having to wait for everyone to be on board.

            In backrooms, there might even be negotiations between the (no-cost) MIDI Association and the MMA. It’s hard for us, mere mortals, to understand what’s going on there. But it doesn’t sound like the main hurdles are technical.

            So, if we’re trying to avoid repeating past mistakes, maybe it’s less about creating an ideal standard which would magically gain every manufacturer’s approval. Maybe it’s more about agreeing to incrementally implement existing standards.

            MPE right now is a whole lot more than demos and talking. It’s supported by most DAWs except for Live. Even Max, developed by Ableton-owned Cycling ‘74 does well with MPE. Several softsynths leverage it as do a few hardware ones. It’s been working in full production environments. It doesn’t solve most of MIDI’s problems but it’s a major “quality of life” improvement.
            And it’s become a key part of my checklist when buying software or hardware.

            1. MPE, while officially an MMA standard, really is a bit of a hack. But I’m actually a bit surprised how widely it is supported. I’m not surprised that some companies have not adopted it. MPE is only suited to instruments that have complex per-note control. A traditional synthesizer key does not have the 3D control that a key on a Roli Seaboard offers. There is little that traditional keyboards could do to make use of MPE. The market for these alternative type controllers is really pretty small compared to the number of more traditional MIDI instruments sold.
              That said, I think the MIDI industry recognizes the interest in increasing musical expression possibilities and this update to MIDI should address that interest and do so in a way that is not so much of a hack. This update work was in progress before MPE came along so it seems to me that some manufacturers looked at the limited MPE market chose to wait.
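The per-note trick at the heart of MPE can be sketched in a few lines. Below is an illustrative sketch in raw MIDI bytes (the class and helper names are made up, not from any official SDK): notes in an MPE lower zone rotate across member channels 2–16 so each note can carry its own pitch bend.

```python
# Sketch of MPE-style channel rotation (lower zone: master on channel 1,
# member channels 2-16). Channel numbers are 0-based on the wire.
# Names here are illustrative, not from any official library.

MEMBER_CHANNELS = list(range(1, 16))  # wire channels 1-15 = MIDI ch 2-16

def note_on(channel, note, velocity):
    return bytes([0x90 | channel, note, velocity])

def pitch_bend(channel, value):
    # value: 0..16383, centre = 8192 (no bend); sent LSB-first on the wire
    return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

class MpeAllocator:
    """Round-robin allocation of notes to member channels."""
    def __init__(self):
        self.next_idx = 0
        self.active = {}  # note -> channel

    def start_note(self, note, velocity, bend=8192):
        ch = MEMBER_CHANNELS[self.next_idx % len(MEMBER_CHANNELS)]
        self.next_idx += 1
        self.active[note] = ch
        # per-note pitch bend is sent before the note-on in MPE practice
        return pitch_bend(ch, bend) + note_on(ch, note, velocity)
```

Because each note lives on its own channel, plain channel-wide messages (pitch bend, CC 74, channel pressure) become per-note controls — which is exactly why it works with unmodified 1983-era message types, and why it burns through channels so quickly.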

              1. Agreed that MPE’s a hack, a kludge, a transition measure, a stopgap solution, the latest attempt at an age-old problem, a bit of an edge case, hard to understand from the perspective of what already exists…
                Thing is, it’s been gaining traction and, from a systems/design thinking perspective, traction is key to the change process. More important to consider in the long term than the technicalities of MPE itself (including the fact that several software devs have implemented it poorly). Does MPE make sense for Sylphyo, Push, Maschine? Maybe not. Do these controllers make as much sense in a MIDI-CI world as they did in the piano-centric model of MIDI 1.0? Maybe not.

                Not only is it dangerous for incumbents to wait for the market to “figure it out” but it’s also representative of a specific corporate attitude which isn’t very adaptive in the current dynamics of the digital technology world. Rather surprising from a startup like Aodyo focused on a single core product. Much less surprising from a large company like NI or Ableton with a wide and somewhat confusing array of products and services.
                The fact that these companies don’t move much in the MPE space isn’t spiteful. But their “apathy” can be detrimental to the whole scene in the medium term.

                Maybe MPE isn’t the best angle to discuss this, then. If people think too much about the specifics, they may miss the bigger picture (which does include MIDI-CI). MIDI-BLE might be better. Apart from desktop Linux, OS support is widespread enough that people don’t typically wonder whether or not it’ll work. In my experience, it’s already more efficient than other Bluetooth standards in terms of pairing, maintaining connection, latency, and overall reliability. Of course, it can improve. The number of dedicated Bluetooth MIDI controllers on the market isn’t large, but general-purpose devices can become MIDI-BLE controllers. Given the number of iOS apps which support MIDI-BLE, it sounds like it might be trivial to implement from the software side (and desktop apps don’t have to worry about it if the OS supports the connection). No idea how big the potential market is, but it’s not tiny. It might actually be a dead end, especially if Bluetooth itself doesn’t improve much in terms of latency. But it’s surprisingly usable now.
                It was very surprising to me that Aodyo would exclusively choose another wireless option. Sounds to me like an engineering decision, not a systems design one. Was MIDI-BLE unable to support the datastream sent by the Sylphyo prototypes? Quite possible. If so, how come ROLI’s able to work it out with MPE on the Lightpad and Seaboard Blocks when a monophonic device couldn’t? Can it be because the Sylphyo is overengineered? As an Eigenharp Pico owner, I can’t help but imagine a scenario in which a company like Berglund Instruments, Eigenlabs, or Aodyo partners with ROLI to create some kind of hyperexpressive breath-savvy polyphonic controller which can drive an iOS softsynth wirelessly and “donglelessly”. ROLI’s full of faults and it’s important to remain critical. But people are a bit too quick to dismiss their tiny controllers, and they miss the feats behind them.
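For what it’s worth, the framing MIDI-BLE adds around ordinary MIDI messages is tiny, which is part of why it’s so easy for apps to adopt. The sketch below follows my reading of the BLE-MIDI spec’s 13-bit millisecond timestamp scheme; treat the exact byte layout as something to verify against the spec rather than gospel.

```python
def ble_midi_packet(timestamp_ms, midi_bytes):
    """Wrap one MIDI message for a BLE-MIDI characteristic write (sketch).
    Per my reading of the BLE-MIDI spec: a header byte carries the high
    6 bits of a 13-bit millisecond timestamp, and each message is
    preceded by a timestamp byte carrying the low 7 bits."""
    ts = timestamp_ms & 0x1FFF           # 13-bit rolling timestamp
    header = 0x80 | (ts >> 7)            # top bit set marks the header
    ts_byte = 0x80 | (ts & 0x7F)         # top bit set marks the timestamp
    return bytes([header, ts_byte]) + bytes(midi_bytes)
```

The timestamps exist precisely because Bluetooth delivery is bursty: the receiver can re-spread the messages in time, which mitigates (but doesn’t eliminate) the jitter people complain about.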

                1. @enkerli just a clarification: SWAM instruments are monophonic by design, because they model a single “solo” instrument, not a section or an ensemble of instruments. It has nothing to do with limited support of MPE. MPE does not “force” the generator to be polyphonic.
                  That said, I can assure you that all the MMA members are really working hard to provide standards that are widely adoptable and immediately implementable by the music industry.

                  Emanuele Parravicini, CTO at Audio Modeling

            2. MPE is also a bit of a pain to work with 🙂 From my view, MPE works really well with software-based synths – it doesn’t work as easily with hardware. That said, Camelot can turn a Montage into an MPE device, albeit still relying on software to do so.

              You bring up BLE – I have BLE dongles and they work OK… but not if you flood messages down them… and latency is not great… That’s not to bag on it – because I use them etc. – but it has its limitations. Considering there are enough complaints about latency and timing in these messages 😛

              >> In backrooms, there might even be negotiations between the (no-cost) MIDI Association and the MMA. It’s hard for us, mere mortals, to understand what’s going on there. But it doesn’t sound like the main hurdles are technical.

              You know, it’s funny when I think about this. The MMA, I believe, is very hesitant (especially in the past) to talk about anything that is upcoming, or to show half-finished stuff, for fear that it will get implemented and then broken in the final release (I think this is mentioned in the historical article on the midi.org website). Then, when they do make announcements, everyone gets so passionate about MIDI (and its faults) that the responses get a little crazy 🙂

              There have been real strides in fixing this in the MMA, with the public-facing MIDI Association and the announcements around MIDI-CI. However, I think there is a lot more that could be done as far as community engagement and improvements to the midi.org website.

              1. Sounds like we’re in agreement about the broad-level points. Our perspectives may differ, but we’re actually talking about the same things. Which is refreshing.
                The MMA/MA situation is indeed funny. The MMA, to me, is like those industrial-era standards (including the ISO 9k range) which involved a lot of backroom negotiations and rarely gained input from a wide base. To me, the launch of the MIDI Association was a sign of the times: increased transparency, openness, inclusiveness, accountability, adaptability, timeliness. Not that it’s better in the abstract. It’s just more fitting in the current landscape (which includes much more than electronic devices or software for musicking). The JUCE conf panel during which it was discussed was fascinating for the same reasons.

                As for MIDI-BLE and MPE’s problems, they’re not as bad if the new mindframe is about quicker iteration and increased public feedback. We’ll always have limitations and there’s a huge amount of talk of their importance in creative endeavours. But painpoints are more likely to shift among people who agree to move forward before waiting for an optimal standard.

                Latency is a really interesting one, for me. Many people are quick to raise it as a “gate”. If it reaches a certain (undisclosed) threshold, it’s a complete dealbreaker. Anything below that threshold is necessarily better.
                Thing is, in live situations at least, latency is specifically a subjective issue: it’s about a subject’s perception. And there’s almost always some latency, even with acoustic instruments a few metres apart.
                Perhaps luckily, my own latency threshold sounds like it might be very high. Maybe my brain hasn’t been “spoiled” by the quest for low latency. Maybe it just adapts quickly to situations in which the feedback loop shifts. Whatever the reason, it hasn’t been a problem for me to, say, use a MIDI-BLE dongle to drive the Moog Model D softsynth on an iPad Pro from a wind controller. Listened to some recordings and, though my playing wasn’t so good, it’s obvious to me that latency wasn’t the problem.

                Decreasing latency might be a side-effect of other technological changes, for instance in the way Bluetooth signals are processed. But it might be a dangerous strategy to wait for latency to decrease before adopting a certain technology. As we say in French, “better is the enemy of good”.

                1. “To me, the launch of the MIDI Association was a sign of the times: increased transparency, openness, inclusiveness, accountability, adaptability, timeliness.”

                  Thanks for the nice comment. We are trying, but there is always resistance to change in specification organizations. If you know the history of MIDI, these tensions between closed specification development and open input from end users go all the way back to the International MIDI Association (IMA) and the MMA. But the good news is that big companies (Yamaha/Steinberg, Roland, Korg, ROLI, Native Instruments, Ableton) and smaller, more entrepreneurial companies are all involved.

                  Regarding latency, don’t forget jitter. Sometimes jitter (the timing changing from note to note) is worse. But we are looking at everything we can, and the new MIDI-CI paradigm makes a lot of things possible in the future.

              2. “However I think there is a lot more that could be done as far community engagement and improvements to the midi.org website.”

                Contact us directly at [email protected]. We would love to hear your ideas about how to increase community engagement and to improve the site.

                We really think of The MIDI Association as a community so join as a member and we’ll make you an author so you can write articles.

                1. >> “However I think there is a lot more that could be done as far community engagement and improvements to the midi.org website.”

                  >> Contact us directly at [email protected]. We would love to hear your ideas about how to increase community engagement and to improve the site.

                  Yep, can do. And just to be clear, I’m not putting down what the MIDI Association is doing (frankly, it is doing an awesome job) – I’m being impatient 😛 Sorry if I was not clear. It is this kind of contact, where The MIDI Association reaches out directly, that I believe is immensely important.

      1. Are you trolling or serious? 7-bit CC resolution and a data transmission rate of only 3125 bytes/second come to mind instantly.
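The 3125 bytes/second figure follows directly from the classic DIN transport: 31,250 baud with 10 bits on the wire per byte (8 data bits plus start and stop bits). A quick back-of-the-envelope:

```python
# Classic 5-pin DIN MIDI bandwidth, worked out from the serial line rate.
BAUD = 31250           # DIN MIDI line rate, bits per second
BITS_PER_BYTE = 10     # 8 data bits + 1 start bit + 1 stop bit

bytes_per_second = BAUD // BITS_PER_BYTE        # 3125 bytes/s
note_on_ms = 3 * BITS_PER_BYTE / BAUD * 1000    # a 3-byte note-on: 0.96 ms
```

So a dense 10-note chord with controller data can take several milliseconds just to serialize — one reason timing tightens considerably once MIDI moves to USB or runs inside a computer.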

        1. Not trolling. MIDI has its limitations for sure, although NRPN and MIDI over Ethernet solve the two problems you mention.

          Yes MIDI could use a refresh but it’s not the travesty the OP suggests.

          1. “Solve” lol
            Let’s not add how unstable and variable MIDI clock sync is from device to device, the port and channel number limitations, MPE not being a thing (despite being qualified as a “standard”). The biggest thing to happen to MIDI since its inception is USB, and that’s nothing to write home about. It can definitely do with more than a refresh.

            1. Lack of clock stability is not MIDI’s fault. Some devices just implement it poorly.

              I’m still struggling to understand the value of MPE when we already have mono mode, which has been used by guitar synths since the Roland GR-700. Still, it is a small and very vocal minority who are calling for it.

              USB MIDI in my experience blows. Much better to go with a driverless communication means like straight serial (DIN) or Ethernet.

            2. MIDI clock sync problems are purely device-specific issues and have absolutely nothing to do with MIDI. The implementers just can’t seem to program a halfway decent PLL, which is required for any sync.
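A “halfway decent” receiver-side fix doesn’t have to be a full PLL: even simple exponential smoothing of the interval between the 24-pulses-per-quarter-note clock messages steadies the tempo estimate considerably. A toy sketch (my own illustration, not from any product):

```python
# Toy receiver-side smoothing of jittery MIDI clock (24 ppqn).
# A real implementation would also handle start/stop/continue and
# tempo changes; this only illustrates the filtering idea.

class ClockSmoother:
    def __init__(self, alpha=0.1):
        self.alpha = alpha       # smoothing factor: lower = steadier
        self.interval = None     # smoothed seconds between clock pulses

    def on_pulse(self, dt_seconds):
        """Feed the measured time since the previous clock pulse;
        returns the current BPM estimate."""
        if self.interval is None:
            self.interval = dt_seconds
        else:
            # exponential moving average absorbs pulse-to-pulse jitter
            self.interval += self.alpha * (dt_seconds - self.interval)
        return 60.0 / (24 * self.interval)
```

The trade-off is the classic one: a small `alpha` rejects jitter but reacts slowly to genuine tempo changes, which is roughly the loop-bandwidth tuning a PLL designer faces.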

        2. The first 32 MIDI CCs are 14-bit. RPN and NRPN are 14-bit. MIDI itself is not at fault when manufacturers choose to only implement 7 bits. But this new “major update to MIDI” should make it a default for devices to implement higher resolution. The relatively slow rate of MIDI transmission really only applies to 5-pin DIN. I’d guess that over 90% of all MIDI traffic these days travels over USB, or inside a computer between applications or between a DAW and plugins.
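Concretely, the 14-bit mechanisms already in MIDI 1.0 work by splitting a 0–16383 value into two 7-bit bytes: MSB on CC 0–31 paired with LSB on CC 32–63, or the CC 99/98/6/38 sequence for NRPN. The helper names below are my own, a sketch rather than any official API:

```python
def cc(channel, controller, value):
    """One Control Change message (all arguments 7-bit, channel 0-based)."""
    return bytes([0xB0 | channel, controller, value])

def cc14(channel, controller, value):
    """14-bit CC: MSB on CC n, LSB on CC n+32 (valid for n in 0-31)."""
    assert 0 <= controller <= 31 and 0 <= value <= 16383
    return (cc(channel, controller, value >> 7) +
            cc(channel, controller + 32, value & 0x7F))

def nrpn(channel, param, value):
    """14-bit NRPN: select the parameter via CC 99 (MSB) / CC 98 (LSB),
    then send the value via CC 6 (Data Entry MSB) / CC 38 (LSB)."""
    return (cc(channel, 99, param >> 7) + cc(channel, 98, param & 0x7F) +
            cc(channel, 6, value >> 7) + cc(channel, 38, value & 0x7F))
```

Note the cost: a single 14-bit NRPN update is four full CC messages (12 bytes), which on 5-pin DIN is nearly 4 ms — part of why so many manufacturers settled for 7 bits.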

      2. The first one that comes to mind is when you slowly move a parameter and get the jaggies, or stairstepping, or aliasing, or whatever those darn kids are calling it these days.

  2. My knee-jerk reaction is that NI and Ableton are both deep into “today’s workflow” which emphasizes realtime loop manipulation, grid-based sequencing, and a somewhat narrow range of genres.

    MIDI serves a MUCH LARGER community of musical and non-musical artists. My hope is that they add comprehensive features that include past, present and future music making, as well as other things that could be controlled with this platform.

    As they add bandwidth, I hope we’ll see 16-bit controller depth at 48K scan rates. As we have attack & release velocity, add other attack and release values (that can be determined by the sending hardware). Have multiple per-note controls (like Poly AT, but more than one), without sacrificing multi-timbral operation. How about 128 MIDI channels per port?

    I’d like to see a standardized per-note tuning specification. So you could transmit a request for a particular tuning map, or alternatively use MIDI itself to regulate the tuning of each note, for alternate scales, etc. I don’t know how a new MIDI spec could increase options for rhythms, but it would be nice to keep that in mind.

    If they do it right, we could use this new spec for the next 50 years.
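For what it’s worth, MIDI 1.0 already has a (little-implemented) per-note tuning mechanism: the MIDI Tuning Standard. Below is a sketch of the real-time single-note tuning change SysEx as I understand the MTS layout — verify the exact byte order against the spec before relying on it. The fraction field resolves to 100/16384 ≈ 0.006 cents per step.

```python
def mts_single_note_tuning(device_id, program, changes):
    """MIDI Tuning Standard real-time single-note tuning change (sketch;
    check the MTS spec for the authoritative byte layout).
    changes: list of (key, (semitone, fraction)) where `semitone` is the
    nearest equal-tempered MIDI note and `fraction` is the offset above
    it in units of 100/16384 cents (14-bit, sent as two 7-bit bytes)."""
    body = []
    for key, (semitone, fraction) in changes:
        body += [key, semitone, (fraction >> 7) & 0x7F, fraction & 0x7F]
    return bytes([0xF0, 0x7F, device_id, 0x08, 0x02, program, len(changes)]
                 + body + [0xF7])
```

Because it is a real-time universal SysEx, a receiver that supports it can retune notes on the fly — essentially the “use MIDI itself to regulate the tuning of each note” idea, just one that very few instruments ever shipped.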

    1. I think you may have a limited understanding of how people are using Live. It’s one of the main platforms for experimental music. I mean it was built in Max originally.

      1. You’re right, AnalOG. I’m not very knowledgeable about Live. I shouldn’t have commented with such certainty. Since experimental music is pretty fascinating to me, I should probably check it out. I’m just not into looping.

      2. Don’t know that much about the range of uses for Live either, but the counterargument leaves me unconvinced.

        Max itself remains one of the main platforms for experimental music, even after Ableton added M4L… and bought C74.
        Sonic Pi, Eurorack, Pure Data, Arduino, VCV’s Rack, Processing, Raspberry Pi, and so many other things are also doing a lot for musical experimentation.

        Live might be leveraged by experimental musickers but it’s hard for me to find a key feature of Live 10 which expanded its experimental affordances from 9. Ableton is supporting forward-looking musicking, especially through Loop and Link. But it’s hard to disagree with the core point about Push/Live revolving around “today’s workflow”.

      1. I stand corrected. I had forgotten. I don’t think I’ve ever seen it implemented… or, more honestly, I’ve just never tried it. I wonder if my Kurzweil K2xxx series instruments respond to it? One way to find out.

  3. I would just love to see a MIDI-only sequencer. No audio. Just like Cubase was on the Atari.
    I have had about 25 12”s released over the years, and I make tracks as I did back when Ataris were new. A lot of us hardware users need a reliable MIDI sequencer minus the bloatware and softsynths.

    1. Can still rock lots of old sequencers on computers. Just might take a little sweat to get the older computer set up. I actually did this with DOS and Voyetra but was sadly somewhat crippled by a lack of UNDO. :/ Grown toooooo used to it.

      For straight hardware, there’s the Squarp Pyramid, Cirklon, Social Entropy Engine… We’re flush with them.

  4. @stub – i’m not a midi spec expert, but midi already has attack and release values: note on and note off velocity. Most synths tend to use note-on velocity for the magnitude of the envelope, rather than the rate of the envelope, and ignore note-off velocity – but that’s not the midi standard’s fault.

    1. I did mention that we have attack & release velocity.

      I’m talking about having multiple types of control events for starting a note, and multiple streams of data during the sustain of each note (like Poly AT 1, Poly AT 2, etc.) and multiple values for the release of a note.

      The sending device, which might be a next-gen alternate controller, could generate high-res velocity, plus several other attack values based on finger position, or force, or some other gesture, or it could just be a pre-programmed value from a keymap. Likewise, have 3 or 4 streams of controls for per-note realtime control during the sustain, and 3 or 4 release control values.

      It doesn’t matter what kind of gesture generates the value. It’s just a tag for the MIDI spec.

    1. This is not a power play; I assume these companies are joining MMA to be part of the industry’s move to support MIDI-CI. MIDI-CI is a foundational component of this new update to MIDI but it does nothing on its own. The specifications built on top of MIDI-CI are still yet to be released. These new MMA members will be joining the effort to move ahead with this major expansion to MIDI.

      1. Agreed on MIDI-CI, but time will tell whether or not it was a political move. Given the lack of even lip service to recent changes in the MIDI world, let’s just say that the optics are bad.

  5. The absolute power of MIDI is its backward compatibility. Connecting newer stuff in a better way is no problem; however, doing that without losing MIDI’s superpower is not that straightforward.

    1. Not really – it will pretty much use two standards. The new MIDI protocol supports 2-way communication, and if the device doesn’t talk back, it will fall back to the traditional MIDI standard. So the new MIDI specs don’t have to be backward compatible, since they will only be used when the other product supports them.

  6. this is great news – people joining together to help push and create new standards for an important platform.
    imagine if the governments of the world actually did this with everyone in mind and not themselves. oh, they call that globalism. we’re fucked. i love midi 🙂

  7. i’m hoping whoever is developing this will take a look at advanced OSC architectures like Ossia (https://ossia.io/libossia/) and find some inspiration in that. we need a widely implemented universal network based architecture, something that can auto-populate lists of parameters for instruments/devices, and something that makes no expectation of the type of device you are controlling with. the architecture shouldn’t care if i’m using a midi keyboard, drum controller, a custom instrument in Max, or a DIY broomstick guitar controller.

    1. We are working on Property Exchange (PE) which is actually very similar to OSC Queries. With anything that supports Property Exchange you can universally get patch lists, parameter lists, controller lists and even auto generate graphic editors because it uses JSON. There will be demos happening at the Audio Developers Conference in London this month.
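To make the idea concrete, here is a purely hypothetical sketch of what a JSON-based controller-list reply and its consumption by a host might look like. The resource and field names below are invented for illustration — the actual Property Exchange specification had not been published when this thread was written:

```python
import json

# Hypothetical shape only: field names are invented to show the idea of a
# self-describing, JSON-based parameter list that an editor could query
# from a device over MIDI-CI Property Exchange.
device_reply = json.dumps({
    "resource": "ChCtrlList",
    "channel": 1,
    "controls": [
        {"title": "Cutoff",    "ctrlType": "cc",
         "ctrlIndex": [74],    "numSigBits": 14},
        {"title": "Resonance", "ctrlType": "nrpn",
         "ctrlIndex": [0, 21], "numSigBits": 14},
    ],
})

# A host could auto-generate a labelled UI from such a reply:
controls = json.loads(device_reply)["controls"]
labels = [c["title"] for c in controls]
```

This is essentially the same shape OSCQuery replies take, which is why the two approaches feel so similar: the payload describes name, type, and resolution, and the host builds mappings from that description instead of from a hardcoded CC chart.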

      1. excellent, thanks for your reply!

        it will be essential that instrument designers can create their own specific subset of parameters for their instrument (with labeling). for example, instead of 128 7-bit MIDI CC parameters (some pre-defined to attack, pitch, etc.), we should have the capability to define an arbitrary number of parameters with a much greater level of resolution, with multiple types of data (ms, Hz, time, etc.).

        hosts/instruments will need to develop the capability to easily map these parameters in various ways, something which i guess is outside of your spec but should be considered in the process.

        we’ve had these kinds of capabilities for years with things like OSCQuery, Minuit, and more recently OSSIA, but the situation is completely fragmented, and only a few hosts are capable of working with any of these protocols. even making quick mappings from Max/MSP to Reaktor via OSC is really problematic, but the proposed PE protocol gives me optimism for the future.

        looking forward to learning more after the conference.

  8. whatever happens with MIDI in the future, please retain the ground isolation aspects of DIN MIDI somehow. USB MIDI causes ground loops – FAIL. We all would love increased bandwidth, but retain ground isolation, FFS; if optical xfer is too slow, maybe transformer isolation or something?

  9. Good conversation. From a user’s perspective… auto-populate is a wonderful idea. MPE is wonderful. Elektron has Overbridge; Logic has its own MPE-like standard. I’d like to see a more straightforward way to work with microtuning that could be as simple as program change messages. Meaning I want to change tuning tables easily, not just per patch. Having it in the spec would be nice. I know from experience that USB is sloppy, and Expert Sleepers has their solution to that, even though it looks convoluted and difficult to set up. As far as parameter resolution in hardware, I understand that has more to do with microprocessor speeds. Moog made good strides in their Phatty and Sub lines, and Roland got enough resolution in their Boutique line to give us 4 notes of polyphony with smooth performance. Lots of stuff to discuss, and I’m fine waiting another half-decade if that’s what it takes to get it right in the next implementation. I want MIDI 2.0 to last another 30 years without changes!

    1. No, please – not another standard that is supposed to be set for all time.
      Original MIDI had limitations; that has been apparent for quite some time now.
      Fortunately, the new MIDI standard will include 2-way communication and fall back to the old standard when there is no reply. They could build it so that MIDI 3 could include support for both MIDI 1 and MIDI 2 as fall-backs. That way, they can develop the new standard quicker: if they find they have forgotten some things, or new needs arise, they can do the MIDI 3 standard, then MIDI 4, and so on.

      Considering the amount of communication and processing that will have to be applied for the MIDI 2 standard, it will still need a host, and USB is more practical when using computers – after all, most MIDI connections will be made in a computer environment.

  10. “What a time to be alive and into synths!!!” ………is something we will say in a decade or so when they adopt midi 2.0. And never until then.
