API design: Presets, pre-patched, normalized, or modular

Making APIs is both easy and hard. See, it's one thing for a software engineer to make a piece of software aimed at a musician or artist who is deeply scared of what horrors lurk inside the chassis, because you're designing around a fear you may never have shared. Writing APIs is different, because you are presumably writing for someone closer to a like-minded individual.

Except that sometimes people forget that if a user wanted an API that demanded deep domain knowledge of the underlying system, they'd probably just write the underlying system themselves. APIs are accelerators. They enable someone to ignore the tricky non-core parts of their problem.

In the very very very early days of musical synthesizers… well, synthesizers weren't actually anything more than a rack of electronic hardware that happened to do musically interesting things. But in the late 60s, you could buy a Moog modular system, which had a bunch of modules that you could wire together and make music with. They were mostly a pile of switches and knobs and a whole lot of 1/4” audio cables.

This turned out to be brilliant for music, as people could create entirely new tones, but it was also a major headache for the musician. See, for a musician to get musical sounds out of their brand-new synth, they needed to set a bunch of knobs and connect a bunch of cables so that the keyboard was plugged into at least one of the oscillators (the module that makes the sound), the signal made it through at least one filter (which controls the timbre of the sound), and then into at least one amplifier (which keeps the note silent when you are not playing and lets it sound while you hold the key down). And then, to make really interesting sounds, you'd generally need to plug in more things still.

Now, there's generally a standard order in which things get plugged in. So, in the interest of having fewer cables hanging around, manufacturers eventually started to make synthesizers semi-modular, with a reasonable signal path already wired up by default. Maybe there were switches, or at least more conveniently located jacks.

Eventually, even that became a big problem, so they started making synthesizers that were fully pre-patched, with very few options and no way to insert a plug anywhere into the signal chain. You also have to remember that the first synths were designed to play one note at a time. Making a modular unit that could handle 8 or more notes at the same time was a big pain, and there's really only one production modular unit that tried to do that. Furthermore, to avoid the trouble of trying to quickly reconfigure cables mid-concert, manufacturers started to put microcontrollers inside keyboards so the entire suite of settings could be stored and retrieved with a small number of button presses.

By the 80s, most synths were the cute sexy things made of piano keys and buttons and maybe some knobs that you picture when I say 'synthesizer', rather than racks of gear. In more recent years, synths have moved from hardware devices to software, which means the synthesizer's topology can now be chosen freely. As it turns out, not everybody has moved back to fully modular synths. Most people still just want to load up a synth, maybe scroll through some presets, and bang on the keys.

Now, I'm anticipating having to make this argument in the near-term future, so I'd like to draw an analogy between synth design and API design.

See, you can make everything modular, connect everything with the API equivalent of patch cables, and even allow people to bring their own modules. But when you do this, you decrease the chances that someone will want to wade through all those layers of wiring just to use the API.
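To sketch what that looks like in code (all names here are hypothetical, just continuing the synth analogy), a fully modular API leaves every connection to the caller:

```python
# Hypothetical fully modular API: nothing is pre-wired, and callers can
# substitute their own modules anywhere in the chain.

class Oscillator:
    def render(self, note):
        return f"tone({note})"

class Filter:
    def __init__(self, source):
        self.source = source

    def render(self, note):
        return f"filtered({self.source.render(note)})"

class Amplifier:
    def __init__(self, source):
        self.source = source

    def render(self, note):
        return f"amplified({self.source.render(note)})"

# Every caller must repeat this wiring before a single note comes out.
chain = Amplifier(Filter(Oscillator()))
```

Maximum flexibility, but the boilerplate at every call site is exactly the pile of patch cables described above.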

Or you can avoid all of that complexity, expose a few functions, and hide everything else. Frequently this is a perfectly valid option, but you end up losing the users who have their own way of doing things that they'd like to integrate.
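Sketched in the same hypothetical terms, the pre-patched extreme is a single function with the whole chain sealed inside:

```python
def play(note):
    # The entire signal chain is fixed internally; there is no seam
    # where a caller could insert or replace a stage.
    return f"amplified(filtered(tone({note})))"
```

One call and you get sound, but if you want a different filter, you're out of luck.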

You can provide presets, in the form of API functions that each assemble a sequence of pieces into something useful. But that can get tricky, because synthesizers generally don't change, but software does, so keeping the presets seamlessly updated becomes a huge problem.
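A preset, in this sketch, is just a function that builds a known-good configuration on the caller's behalf (again, all names are hypothetical):

```python
def warm_pad():
    # Assembles one specific, useful configuration out of internal pieces.
    # If the internals change shape, every preset like this has to be
    # revisited -- the maintenance problem described above.
    return {"waveform": "saw", "cutoff_hz": 800, "attack_s": 0.5}
```

Each preset is a promise about the internals, and every preset you ship is another promise you have to keep as the software evolves.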

Or you can make things normalized, where the API automatically works in the simplest possible fashion and then provides you with a way to plug in or swap out pieces as necessary.
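A normalized design might be sketched as defaults that are used unless the caller patches in a replacement (hypothetical names once more):

```python
def default_filter(signal):
    return f"filtered({signal})"

def default_amplifier(signal):
    return f"amplified({signal})"

def make_voice(filter_fn=default_filter, amplifier_fn=default_amplifier):
    # Each stage is "normalled" to a sensible default, but any stage
    # can be swapped out without rewiring the rest of the chain.
    def play(note):
        return amplifier_fn(filter_fn(f"tone({note})"))
    return play

# Works immediately with zero configuration...
voice = make_voice()

# ...and a custom stage drops in without touching the others.
bright = make_voice(filter_fn=lambda s: f"bright({s})")
```

The default path gives the pre-patched experience, while the keyword arguments are the jacks you can plug into when you need them.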

But I do know this: even though I'm perfectly willing to deal with a fully modular environment, and I understand what each of the pieces does, I still find myself not wanting to patch a bunch of synth modules together before I get a sound. I want to start with everything set up in a reasonable configuration.