Tag Archives: Custom Built

Why I Think Steam Machines Are Cool

My audio-human mind races when thinking of high-performance, compact, affordable machines.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.


“Wait,” you’re thinking, “I thought this site was about live shows. Steam Machines are gaming devices.”

You’re right about that. What you have to remember (or just become aware of) is that I have a strange sort of DIY streak. It’s why I assembled my own live-audio console from “off the shelf” products. I really, really, REALLY like the idea of doing powerful things with concert sound via unorthodox means. An unorthodox idea that keeps bubbling up in my head is that of a hyper-customizable, hyper-expandable audio mix rig. It could be pretty much any size a user wanted, using pretty much whatever audio hardware a user wanted, and grow as needed. Also, it wouldn’t be too expensive. (About $900 per 16×16 channel “block.”)

When I look at the basic idea of the Valve Steam Machine, I see a device that has the potential to be a core part of the implementation.

But let’s be careful: I’m not saying that Steam Machines can do what I want right now. I’m not saying that there aren’t major pitfalls, or even dealbreakers to be encountered. I fully expect that there are enormous problems to solve. Just the question of how each machine’s audio processing could be conveniently user-controlled is definitely non-trivial. I’m just saying that a possibility is there.

Why is that possibility there?

The Box Is Prebuilt

The thing with prebuilt devices is that it’s easier for them to be small. A manufacturer building a large number of units can get custom parts that support a compact form factor, put it all together, and then ship it to you.

Of course, when it comes to PCs, you can certainly assemble a small-box rig by hand. However, when we’re talking about using multiple machines, the appeal of hand-building multiple boxes drops rapidly. So, it’s pretty nice that a compact but high(er) performance computing device can be had with little effort.

The System Is Meant For Gaming

Gaming might seem like mere frivolity, but these days, it’s a high-performance activity. We normally think of that high performance as being located primarily in the graphics subsystem – and for good reason. However, I also think a game-capable system could be great for audio. I have this notion because games are so reliant on audio behaving well.

Take a game like a modern shooter. A lot of stuff is going on: Enemy AI, calculation of where bullets should go, tracking of who’s shooting at who, collision detection, input management, the knowing of where all the players are and where they’re going, and so on. Along with that, the sound has to work correctly. When anybody pulls a trigger, a sound with appropriate gain and filtering has to play. That sound also has to play at exactly the right time. It’s not enough for it to just happen arbitrarily after the “calling” event occurs. Well-timed sounds have to play for almost anything that happens. A player walks around, or a projectile strikes an object, or a vehicle moves, or a player contacts some physics-enabled entity, or…

You get the idea.

My notion is that, if the hardware and OS of a Steam Machine are already geared specifically to make this kind of thing happen, then getting pro-audio to work similarly isn’t a totally alien application. It might not be directly supported, of course, but at least the basic device itself isn’t in the way.

The System Is Customizable

My understanding of Steam Machines is that they’re meant to be pretty open and “user hackable.” This excites me because of the potential for re-purposing. Maybe an off-the-shelf Steam Machine doesn’t play nicely with pro-audio hardware? Okay…maybe there’s a way to take the box’s good foundation and rebuild the upper layers. In theory, a whole other OS could be runnable on one of these computers, and a troublesome piece of hardware might be replaceable (or just plain removable).


I acknowledge that all of this is off in the “weird and theoretical” range. My wider goal in pointing it out is to say that, sometimes, you can grab a thing that was intended for a different application and put it to work on an interesting task. The most necessary component seems to be imagination.


Why I Am (Not) Interested In The Industry Standard

Industry standards are helpful reference points, but are not necessarily the best possible approach.



Remember my article about a patch-scheme for a “festival style” show? It actually raised an eyebrow or two. A fellow audio-human (who works on much, much, much larger shows than I do) asked me why my patch list was backwards from what everybody else does. His concern was that, in the festival situations he finds himself in, my “upside down” patch would monkeywrench things if accommodated. It would just be so much easier for everyone if I followed the industry standard of (I guess?) starting with the drums – “kick is channel 1,” in other words.

My response was that, if I had things laid out one way, and a guest engineer came in who wanted them to be another way, then I would be happy to set up any softpatch desired. What I neglected to add at the time was that, if I was “that one guy” where everyone else wanted a different order, I would be happy to just use the standard patch. It wouldn’t ruin my day at all, and it would make things easier for everybody else.

To be open and frank, though, there was something else I wanted to say. I censored myself because I think there’s a place for diplomacy and courtesy, especially when the conversation venue (Facebook comments) isn’t really good for nuance.

What I wanted to say was, “Because my way is better. Why would you put the drums first? They’re the bottom of the priority list.” (The drums are important, but in a small-venue context they usually need the least help from the PA to be in the right spot.)

What was said and unsaid in that conversation is a microcosm of how I feel about industry standards. There are industry standard mics, techniques, PA styles, stage layouts, and whatever else, and they exist for good reasons. Knowing what those reasons are is a good thing, because it’s part of understanding the craft. At the same time, though, industry standards rarely equate to “the best.” They tend to equate to “works acceptably in a wide range of situations.”

58, 57, IBM

Back when Apple Computer was struggling for acceptance, there was a saying: “Nobody ever got fired for buying IBM.” IBM was the industry standard for machines used in an office environment, and even though the Macintosh computers at the time were leaps and bounds ahead in terms of user-friendliness, people kept buying IBM and compatible devices.

Why?

Because IBM was known. Large numbers of people, from the users to the admins, had experience with them. Everybody knew what to expect. They knew that appropriate software would be available, or could be developed by folks that were easy to find. They knew the parts would be there. They knew they could get work done with IBM, even if the computers weren’t revolutionary. They knew that IBM was readily respectable by everyone that they wanted to impress.

In the same way, you could say that “Nobody ever got fired for buying SM-58s and SM-57s.” They’re industry standard mics because they’re built to withstand live shows, basically sound like what they’re pointed at, and literally everybody can get them to work in a reasonable way. They’ve been around forever, and have been used by everybody, their dog, and their dog’s fleas. Even if somebody doesn’t know the model numbers, asking them to draw a picture of a vocal mic and an instrument mic will probably get you an SM-58 and an SM-57.

But they’re not the best at all times. I’ve heard a lot of 58s that imparted far too much low-mid garble to a singer’s voice, and I’ve never once easily gotten as much gain-before-feedback out of a 58 as I have from an ND767a. I’ve miked up tons of amplifiers with all kinds of mics that weren’t SM-57s, and I’ve been perfectly happy about 99% of the time. I’ve done the same with drums. If “sounds decent” is the main priority, then I have a bunch of mics that do that AND take up less space than a big ol’ 57. There are other mics out there that work better for me, in terms of the total solution offered.

This isn’t to say that great things can’t happen with the SM series! I once heard an artist in a coffee shop with a keyboard amp and a 58-style mic. It was the most perfect setup for her voice that you could imagine. I wasn’t expecting what I heard, but she made it work beautifully. Sometimes, “industry standard” and “perfect for this particular application” DO line up.

My point is, though, that in a broad sense the “hidden secret” of being industry standard means being “extraordinarily average.” Thoroughly inoffensive. Safe. Something people won’t be fired for specifying and purchasing.

There’s nothing wrong with that, but for people like me…well, it’s kinda boring.

Sometimes You Need To Be Bored

That last sentence might seem a bit incendiary, depending on who you are. It’s very important to note that being un-boring is a luxury that’s unavailable to many in this business.

A good example is what happens when a venue wants to spend time working with acts that regularly tour at the regional level or above. To be acceptable to those acts (especially if they bring production techs but only minimal gear) requires that the PA and lighting rigs be easy to handle by most folks. The personnel working for the house might be excited about the new mixing consoles that lack a physical control surface, but that’s not something that everybody is prepared to accept. There are plenty of audio humans who just aren’t ready for the idea of having no physical controls at all, whereas probably every sound tech is fine with a console that has a control surface. That’s why control surfaces are still the industry standard. The new surfaceless consoles are nifty, but not for everybody, so a bit of “boring-ness” is required in order for the venue to play well with others.

Industry standards are accepted everywhere, which makes them a safe bet. Non-standards are “risky,” because they tend to conform to the desires of a smaller number of people. Risky is often exciting, however, because that’s where innovation occurs. Iterating on the standard makes the standard more refined, but it rarely produces breakthroughs. It’s entirely possible to, say, “bend the rules” on mixing console cost vs. functionality if you’re willing to do weird things (like dispense with a control surface). Some people will get it, and some people will think you’re crazy. Catering to your own brand of crazy is acceptable if, like me, a guest engineer even being in the room only happens about 0.8% of the time. It’s not acceptable at all if a band tech is going to be “driving” on a regular basis.

Why I’m Not Particularly Interested In The Industry Standard

I personally tend to shrug my shoulders at industry standards for the same reason that people shrug their shoulders in general: There’s almost nothing exciting about what’s been done a million times. Since I currently don’t have to meet riders or provide an easy environment for other techs to work in, I have the luxury of basically doing whatever I want as long as it works.

I love giving “upstarts” and bargain items a chance, because it’s fun to see just how far a piece of gear can go if you spend some time with it.

I don’t fight feedback with per-mix graphic EQs, because the idea of hacking up a whole mix to solve a problem with one input seems crazy to me.

I use a homebrew console because I wanted to have a virtual, independent monitor-world, and nobody made a traditional console I could afford that would do that in the way I wanted.

I don’t use a control surface for mixing because I’ve never cared about moving a whole bunch of faders at once.

I’ve never personally owned an SM-58 or 57, because they just aren’t interesting to me.

I’ve stuffed a cheap measurement mic inside a kick drum on several occasions, because I wanted to see how it would work. (It was actually pretty okay.)

And I just generally roll my eyes at how so much of show production, which used to be a kind of “outlaw” business that pushed boundaries and did things for the fun of it, has become a beige, corporatized affair of trying to basically be like everybody else. It’s like cars, you know? They used to be cool, distinctive works of art, and now every car company is essentially making the same three boring-as-dirt sedans, three bland SUVs, and three unremarkable pickup trucks, because it’s all run by “money” people now who are terrified of not being more profitable next quarter and thus will never do anything interesting YOU GUYS LET ME KNOW IF I’M RAMBLING, ‘KAY?

Now, you can bet that, if I ever went to work at an AV company or production provider, I would be willing to conform to industry standards. In that environment, that would be the appropriate thing to do.

But right now, I have the freedom to be weird and have fun – so I intend to enjoy myself.

I’ll say it again. “Industry standard” doesn’t necessarily mean “the best.” It just means “people will accept this about 95% of the time.”


Not Remotely Successful

Just getting remote access to a mix rig is not a guarantee of being able to do anything useful with that remote access.



The nature of experimentation is that your trial may not get you the expected results. Just ask the rocket scientists of the mid-twentieth century. Quite a few of their flying machines didn’t fly. Some of them had parts that flew – but only because some other part exploded.

This last week, I attempted to implement a remote-control system for the mixing console at my regular gig. I didn’t get the results I wanted, but I learned a fair bit. In a sense, I think I can say that what I learned is more valuable than actually achieving success. It’s not that I wouldn’t have preferred to succeed, but the reality is that things were working just fine without any remote control being available. It would have been a nice bit of “gravy,” but it’s not like an ability to stride up to the stage and tune monitors from the deck is “mission critical.”

The Background

If you’re new to this site, you may not know about the mix rig that I use regularly. It’s a custom-built console that runs on general computing hardware. It started as a SAC build, but I switched to Reaper and have stayed there ever since.

To the extent that you’re talking about raw connectivity, a computer-hosted mix system is pre-primed for remote control. Any modern computer and accessible operating system will include facilities for “talking” to other devices over a network. Those connectivity facilities will be, at a basic level, easy to configure.

(It’s kind of an important thing these days, what with the Internet and all.)

So, when a local retailer was blowing out 10″ Android tablets for half price, I thought, “Why not?” I had already done some research and discovered that VNC apps could be had on Android devices, and I’ve set up VNC servers on computers before. (It’s not hard, especially now that the installers handle the network security configuration for you.) In my mind, I wasn’t trying to do anything exotic.

And I was right. Once I had a wireless network in place and all the necessary software installed, getting a remote connection to my console machine was as smooth as butter. Right there, on my tablet, was a view of my mixing console. I could navigate around the screen and click on things. It all looked very promising.

There’s a big difference between basic interaction and really being able to work, though. When it all came down to it, I couldn’t easily do the substantive tasks that would make having a remote a handy thing. It didn’t take me long to realize that tuning monitors while standing on the deck was not something I’d be able to do in a professional way.

A Gooey GUI Problem

At the practical level, the problem I was having was an interface mismatch. That is, while my tablet could display the console interface, the tablet’s input methodology wasn’t compatible with the interface being displayed.

Now, what the heck does that mean?

Reaper (and lots of other audio-workstation interfaces) are built for high-precision pointing devices. You might not think of a mouse or trackball as “high precision,” but when you couple one of those input devices with the onscreen pointer, high precision is what you get. The business-end of the pointer is clearly visible, only a few pixels wide, and the “interactivity radius” of the pointer is only slightly larger. There is an immediately obvious and fine-grained discrimination between what the pointer is set to interact with, and what it isn’t. With this being the case, the software interface can use lots of small controls that are tightly packed.

Additionally, high-precision pointing allows for fast navigation across lots of screen area. If you have the pointer in one area of the screen and invoke, say, an EQ window that pops open in another area, it’s not hard to get over to that EQ window. You flick the mouse, your eye finds the pointer, you correct on the fly, and you very quickly have control localized to the new window. (There’s also the whole bonus of being able to see the entire screen at once.) With high-precision input being available, the workstation software can make heavy use of many independent windows.

Lastly, mice and other high-precision pointers have buttons that are decoupled from the “pointing” action. Barring some sort of failure, these buttons are very unambiguous. When the button is pressed, it’s very definitely pressed. Clicks and button holds are sharply delineated and easily parsed by both the machine and the user. The computer gets an electrical signal, and the user gets tactile feedback in their fingers that correlates with an audible “click” from the button. This unambiguous button input means that the software can leverage all kinds of fine-grained interactions between the pointer position and the button states. One of the most important of those interactions is the dragging of controls like faders and knobs.

So far so good?

The problem starts when an interface expecting high-precision pointing is displayed on a device that only supports low-precision pointing. Devices like phones and tablets that are operated by touch are low-precision.

Have you noticed that user interfaces for touch-oriented devices are filled with big buttons, “modal” elements that take over the screen, and expectations for “big” gestures? It’s because touch control is coarse. Compared to the razor-sharp focus of a mouse-driven pointer, a finger is incredibly clumsy. Your hand and finger block a huge portion of the screen, and your finger pad contacts a MASSIVE area of the control surface. Sure, the tablet might translate that contact into a single-pixel position, but that’s not immediately apparent (or practically useful) to the operator. The software can’t present you with a bunch of small subwindows, as the minuscule interface elements can’t be managed easily by the user. In addition, the only way for the touch-enabled device to know the cursor’s location is for you to touch the screen…but touch, by necessity, has to double as a “click.” Interactions that deal with both clicks and movement have to be forgiving and loosely parsed as a result.

Tablets don’t show big, widely spaced controls in a single window because it looks cool. They do it because it’s practical. When a tablet displays a remote interface that’s made for a high-precision input methodology, life gets rather difficult:

“Oh, you want to display a 1600 x 900, 21″ screen interface on a 1024 x 600, 10″ screen? That’s cool, I’ll just scale it down for you. What do you mean you can’t interact with it meaningfully now?”

“Oh, you want to open the EQ plugin window on channel two? Here you go. You can’t see it? Just swipe over to it. What do you mean you don’t know where it is?”

“Oh, you want to increase the send level to mix three from channel four? Nice! Just click and drag on that little knob. That’s not what you touched. That’s also not what you touched. Try zooming in. I’m zoomi- wait, you just clicked the mute on channel five. Okay, the knob’s big now. Click and drag. Wait…was that a single click, or a click and hold? I think that was…no. Okay, now you’re dragging. Now you’ve stopped. What do you mean, you didn’t intend to stop? You lifted your finger up a little. Try again.”

With an interface mismatch, everything IS doable…but it’s also VERY slow, and excruciatingly difficult compared to just walking back to the main console and handling it with the mouse. Muting or unmuting a channel is easy enough, but mixing monitors (and fighting feedback) requires swift, smooth control over lots of precision elements. If the interface doesn’t allow for that, you’re out of luck.

Control States VS. Pictures Of Controls

So, can systems be successfully operated by remotes that don’t use the same input methodology as the native interface?

Of course! That’s why traditional-surface digital consoles can be run from tablets now. The tablet interfaces are purpose-built, and involve “state” information about the main console’s controls. My remote-control solution didn’t include any of that. The barrier for me is that I was trying to use a general-purpose solution: VNC.

With VNC, the data transmitted over the network is not the state of the console’s controls. The data is a picture of the console’s controls only, with no control-state data involved.

That might seem confusing. You might be saying, “But there is data about the state of the controls! You can see where the faders are, and whether the mutes are pressed, and so on.”

Here’s the thing, though. You’re able to determine the state of the controls because you can interpret the picture. That determination you’ve made, however, is a reconstruction. You, as a human, might be seeing a picture of a fader at a certain level. Because that picture has a meaning that you can extract via pattern recognition, you can conceptualize that the fader is in a certain state – the state of being at some arbitrary level of gain. To the computer, though, that picture has no meaning in terms of where that fader is.

When my tablet connects to the console via VNC, and I make the motions to change a control’s state, my tablet is NOT sending information to the console about the control I’m changing. The tablet is merely saying “click at this screen position.” For example, if clicking at that screen position causes a channel’s mute to toggle, that’s great – but the only machine aware of that mute, or whether that mute is engaged or disengaged, is the console itself. The tablet itself is unaware. It’s up to me to look at the updated picture and decide what it all means…and that’s assuming that I even get an updated picture.
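The distinction can be sketched in a few lines of Python. (Everything here is hypothetical and simplified; I’m not quoting VNC’s actual protocol or any real console’s.)

```python
# A state-aware remote sends semantic messages. Applying one is trivial,
# and both ends know the resulting control state.
state = {"ch5_mute": False}

def apply_control_change(state, control_id, value):
    state[control_id] = value

apply_control_change(state, "ch5_mute", True)

# A picture-based remote only sends a screen position. The console must
# hit-test that position against its own widget layout, and the remote
# never learns what (if anything) the click actually did.
widgets = {"ch5_mute": (100, 200, 140, 220)}  # x1, y1, x2, y2 bounds

def hit_test(widgets, x, y):
    for name, (x1, y1, x2, y2) in widgets.items():
        if x1 <= x <= x2 and y1 <= y <= y2:
            return name
    return None  # the tap landed on nothing; the remote can't tell
```

In the first case, the control’s state lives on both ends. In the second, only the console knows what happened, and the tablet just waits for a new picture.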

The cure to all of this is to build a touch-friendly interface which is aware of the state of the controls being operated. You can present the knobs, faders, and switches in whatever way you want, because the remote-control information only concerns where that control should be set. The knobs and faders sit in the right place, because the local device knows where they are supposed to be in relation to their control state. Besides solving the “interface mismatch” problem, this can also be LIGHT YEARS more efficient.

(Disclaimer: I am not intimately aware of the inner workings of VNC or any console-remote protocol. What follows are only conjectures, but they seem to be reasonable to me.)

Sending a stream of HD (or near HD) screenshots across a network means quite a lot of data. If you’re using jpeg-esque compression, you can crush each image down to 100 kilobytes and still have things be usable. VNC can be pretty choosy about what it updates, so let’s say you only need one full image every second. You won’t see meters move smoothly or anything like that, but that’s the price for keeping things manageable. The data rate is about 819 kbits/second, plus the networking overhead (packet headers and other communication).

Now then. Let’s say we’ve got some remote-control software that handles all “look and feel” on the local device (say, a tablet). If you represent a channel as an 8-bit identifier, that means you can have up to 256 channels represented. You don’t need to actually update each channel all the time to simply get control. Data can just be sent as needed, of course. However, if you want to update the channel meters 30 times per second, that meter data (which could be another 8-bit value) has to be attached to each channel ID. So, 30 times a second, 256 8-bit identifiers each get 8 bits of meter data attached. Sixteen bits multiplied by 256 channels, multiplied by 30 updates/second works out to about 123 kbits/second.

Someone should check my math and logic, but if I’m right, nicely fluid metering across a boatload of channels is possible at less than 1/6th the data rate of “send me a screenshot” remote control. You just have to let the remote device handle the graphics locally.
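For anyone who wants to check the math, here it is as a quick script, using exactly the assumptions stated above (100-kilobyte frames at one per second, versus 256 channels getting 16 bits of ID-plus-meter data 30 times per second):

```python
# "Send me a screenshot" remote control: one 100-kilobyte frame/second.
screenshot_bits_per_sec = 100 * 1024 * 8      # 819,200 bits/s

# State-based metering: 256 channels, 8-bit ID + 8-bit meter value,
# updated 30 times per second.
meter_bits_per_sec = 256 * (8 + 8) * 30       # 122,880 bits/s

print(screenshot_bits_per_sec / 1000)  # 819.2 kbits/s
print(meter_bits_per_sec / 1000)       # 122.88 kbits/s

# The ratio: fluid metering at under 1/6th the screenshot data rate.
print(meter_bits_per_sec / screenshot_bits_per_sec)  # 0.15
```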

Control-state changes are even easier. A channel with fader, mute, solo, pan, polarity, a five-selection routing matrix, and 10 send controls needs to have 20 “control IDs” available. A measly little 5-bit number can handle that (and more). If the fader can handle 157 “integer” levels (+12 dB to -143 dB and “-infinity”) with 10 fractional levels of .1 dB between each integer (1570 values total), then the fader position can be more than adequately represented by an 11-bit number. If you touch a fader and the software sends a control update every 100th of a second, then a channel ID, control ID, and fader position have to be sent 100 times per second. That’s 24 bits multiplied by 100, or 2.4 kbits/second.
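The control-change numbers check out the same way. (Again, this is just the hypothetical encoding described above, not any real console protocol.)

```python
# Fader resolution: +12 dB down to -143 dB is 156 integer levels, plus
# "-infinity" makes 157; ten 0.1 dB fractional steps per integer level.
fader_values = 157 * 10          # 1570 distinct positions
assert fader_values <= 2 ** 11   # an 11-bit field (2048 values) covers it

# One update: 8-bit channel ID + 5-bit control ID + 11-bit fader value.
message_bits = 8 + 5 + 11        # 24 bits per message

# Sent every 100th of a second while the fader is being touched:
print(message_bits * 100)        # 2400 bits/s, i.e. 2.4 kbits/s
```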

That’s trivial compared to sending screenshots across the network, and still almost trivial when compared to the “not actually fast” data rate required to update the meters all the time.

Again, let me be clear. I don’t actually know if this is how “control state” remote operation works. I don’t know how focused the programmers are on network data efficiency, or even if this would be a practical implementation. It seems plausible to me, though.

I’m rambling at this point, so let me tie all this up: Remote control is nifty, and you can get the basic appearance of remote control with a general purpose solution like VNC. If you really need to get work done in a critical environment, though, you need a purpose built solution that “plays nice” at both the local and remote ends.


Experiments Are For Discovery

Don’t do experiments to save money. Do experiments to learn things and get maximum ownership.


If you yourself aren’t crazy enough to want to build your own amplifier, or construct your own loudspeaker, I’m betting that you know somebody who does. Hey, you know me, and I built my own digital mixing console. That’s pretty “out there” for most audio folks.

The reason people get these bats in their belfries is because building things is fascinating. You get to figure out what actually makes audio gear work – you get a hands-on trip through the actual tradeoffs that industry designers have to handle.

That’s the point of doing experiments: Learning something.

I’ve seen something unfortunate surrounding these endeavors, though. There’s a tendency for people to get into these projects solely for the purpose of trying to save money. When they discover (in one way or another) that doing an experiment is highly likely to cost more than buying a finished product, they bail out. Any excitement they had is completely wrecked.

It’s sad, really.

Makin’ Sawdust

It’s pretty easy for folks to get taken in by websites promising that you can build a superior loudspeaker for less than what it costs to buy one outright. The problem with the assertion is that it forces a lot of assumptions onto both the builder and the project:

  • It assumes that the builder knows how to use the necessary tools.
  • It assumes that the builder has the tools handy, or can obtain them for little cost.
  • It assumes that the tradeoffs made in the project design to allow for inexpensive components are well-understood by the builder.

On that last point, there’s one site for speaker enclosure plans that repeatedly touts how the designs outperform far more expensive models. The thing is that the supplied designs DO outperform their commercial counterparts – but only in one area. The DIY speakers are great if you want to get the maximum per-watt output available from inexpensive drivers, but not so great if you want deep LF (low frequency) extension and consistent overall response.

Once you couple the above with having to buy your own tools and deal with your own construction mistakes, you’ve pretty much burned any monetary advantage you might have had. There’s also the whole problem of how amplification and processing costs have dropped like a rock…as long as those components have been engineered into the actual speaker enclosure. If not, you have to provide that externally, which further drives up the cost of your homebrew project.

Now, sure, you might be able to find a sweet-spot where you can build a box with higher-end parts at a good price. If you’re not trying to maximize profit, and you’re willing to ignore the effective cost of your own labor, then you just might manage to save a few bucks in some way. It’s all just a game of moving the numbers around, though, where you can conveniently sweep certain costs under the perceptual rug.

That’s why “doing it cheaper” shouldn’t be the goal. The goal should be to have fun, learn something about woodworking, get a feel for what works and doesn’t in loudspeaker design, and ultimately have something in your hands where you can say, “I MADE this.” That’s where the real value is – and that value is far in excess of the few bucks you might save if you get lucky.

Console Yourself

Get it? “Console” yourself? It’s a play on…anyway.

In a purely “cash” sense, I did effectively save some money by building my own mixing system. To get fundamentally equivalent functionality and I/O, I would have had to spend about $1000 more than what the build cost. However, it’s important to point out that other, no less important expenses had already been made.

I already knew about the construction, care, and feeding of DAW computers.

I already knew enough about computers in general to be my own tech support.

I already knew enough about signal flow that I could effectively set up my own console configuration.

I already had enough overall experience to know what I wanted, and be able to actually leverage the advantages of the system.

I already had a spare console if something went wrong.

The value of all that goes beyond $1000. Several times over.

Again, though, that’s not the point of building your own digital console. The point is that you get to have a rig that’s truly yours – that you’re responsible for. You get to pick the compromises that you’re willing or not willing to make. You get to be the “proud parent.” You get to discover what it’s actually like to run a system with a custom front-end.

There was a time when pro-audio gear was something that you essentially had to construct yourself. It wasn’t a commoditized industry like it is now. These days, though, economies of scale make it vastly cheaper to buy things off the shelf when compared to doing your own build.

As a result, you shouldn’t do DIY experiments to save money. You should do them because they’re awesome.


UI Setup For A Custom Console

When setting up your own console layout, usability and easy access are key considerations.

This video is an overview of the major tips, tricks, and tactics involved in setting up a software console interface for live audio. Building your own console layout from scratch can be a bit challenging, but it also allows you a LOT of freedom.

Also, if you’re using Reaper (or have software that allows custom track icons), you can download my “number” icons here.


Why I Left SAC

I switched to Reaper from SAC because I wanted more flexibility to define my own workflow.

If you know me, you know that I’m a HUGE fan of my custom-built digital console. It has routing flexibility like nothing else I’ve ever worked with, is far less subject to the whims of manufacturers, and generally lets me do things that are difficult or even impossible with other setups.

What you may not know is that I didn’t always use Reaper as the main software package. I started off with SAC. I was actually very happy with SAC for a while, but the “bloom came off the rose” after a few frustrations popped up.

Don’t Get Me Wrong! SAC Is Rad

I won’t lie. I’m going to be pretty tough on SAC in this article.

The point isn’t to bash the program though.

Software Audio Console is a really neat, purpose-built labor of love. If nothing else, it shows that a reliable, live-sound-capable console can be run on a general-purpose computing platform. It has some great features and concepts, not the least of which is the “separate monitor console for each performer” workflow. That feature, coupled with integrated remote-control support, can potentially be VERY killer for a tech who works with professional bands that carry their own production. (Set everybody up with a remote, let ’em mix their own monitors, you run FOH, and life is dandy. Well, until one of the players causes a massive feedback spike. Anyway…)

SAC is efficient. SAC’s overall control scheme is great for live audio, most of the time. SAC is stable and trouble-free. SAC has very usable snapshot management. Using ASIO4All as a separate driver, I was able to use SAC for live mixing and Reaper for recording, with Reaper effectively running in the background.

SAC is a good piece of software.

If there’s any problem with SAC, it’s that the program is overly influenced by the personal preferences and workflow of its developer, Bob Lentini. If you want something markedly different, you’re out of luck.

It Started With An EQ

I’m a massive fan of Reaper’s native EQ plug. The only thing it’s missing is variable slope for the high- and low-pass filters. I honestly don’t know why anyone would want to buy an expensive, annoyingly copy-protected EQ plugin when Reaper’s EQ is so powerful.

Yup. I’m a bit of a fanboy. Not everyone may share my opinion.

Anyway.

Wanting to use Reaper’s EQ with SAC is what quickly revealed a “blind spot” with SAC’s workflow. I found out that adding FX to a channel was a bit clumsy. I also found out that manipulating FX on a channel was almost horrific.

To instantiate FX on a SAC channel, you have to find the FX control, click on it to get the channel FX chain to pop up, then use an un-filterable list of all available FX to find the one you want, click “Add,” and hope that you’ve gotten the chain order right.

If you didn’t get the order of the chain right, you have to de-instantiate one of the plugs and try again.

In Reaper, plugin instantiation can happen by clicking the insert stack, picking a plug from a filterable and customizable list, and…that’s it. If you got the plugin in the wrong spot, you can just drag it into the right one.

That may not seem like a huge difference, but the annoyance factor of SAC’s clumsiness accumulates greatly over time.

On the live-manipulation side, Reaper is leaps and bounds ahead. If I need to tweak an EQ on the fly (which happens every show, many times), all I have to do is click on the EQ plug in the stack. Immediately, the EQ pops its UI into view, and I can get to work.

In SAC, on the other hand, I have to (again) find the FX control, click to open the channel FX list, find the EQ, then double-click on it in the list to get the GUI to display. A few extra clicks might not seem like much, but this truly becomes a very awkward slog in a big hurry. In fairness, SAC does have a channel EQ that is VERY much more immediate, but what that ended up forcing me to do was to run my beloved plug as a “basic” EQ, and use the channel EQ for everything else. I’m not bothered by complexity, but unnecessary complexity IS something that I dislike.

There’s also SAC’s stubborn refusal to recognize that drag-and-drop is totally “a thing” now. In Reaper, I can drag plugins and sends between channels. In SAC, you can’t drag anything to any other channel. You can drag channels into a different order, but it’s not simple to do (some more unnecessary complexity). In general, dragging things around and multiselecting in Reaper works exactly as you would expect, whereas SAC tends to be VERY finicky about where your mouse cursor is and what modifier key you’re using.

Artificial Scarcity and Workflow Lock-In

In a number of ways, SAC aims to provide a “crossover experience” to techs who are used to physical consoles. This is absolutely fine if it’s what you want, but going this route has a tendency to reduce flexibility. This loss of flexibility mostly comes from arbitrary limitations.

Most of these limitations have enough cushion to not be a problem. SAC’s channel count is limited to 72, which should be WAY more than enough for most of us in small-venue situations. With a SAC-specific workflow, six aux sends and returns are a lot more than I usually need, as are 16 groups and eight outputs.

The problem, though, is that you’re forced to adopt the workflow. Want to use a workflow that would require more than six sends? Tough. Want to use more than eight physical outputs on a single console? Too bad.

Again, there’s no issue if you’re fine with being married to the intended use-strategy. However, if you’re like me, you may discover that having a whole bunch of limited-output-count subconsoles is unwieldy when compared to a single, essentially unlimited console. You might discover that much more immediate channel processing access trumps other considerations. It’s a matter of personal preference, and the thing with SAC is that your personal preference really isn’t a priority. The developer has chosen pretty much everything for you, and if that’s mostly (but not exactly) what you want, you just have to be willing to go along.

Another “sort-of” artificial scarcity issue with SAC is that it’s built on the idea that multi-core audio processing is either unnecessary or a bad idea. The developer has (at least in the past) been adamant that multi-thread scheduling and management adds too much overhead to be worth it. I’m sure that this position is backed up with factual and practical considerations, but here’s my problem: Multi-core computers are here to stay, and they offer a ton of available horsepower. Simply choosing to ignore all that power strikes me as unhelpful. I have no doubt that some systems become unreliable when trying to multiprocess audio in a pseudo-realtime fashion – but I’d prefer to at least have the option to try. Reaper lets me enable multiprocessing for audio, and then make my own decision. SAC does no such thing. (My guess is that the sheer force of multi-core systems can muscle past the scheduling issues, but that’s only a guess.)
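For what it’s worth, the “embarrassingly parallel” part of the problem is easy to sketch: each channel’s processing chain is independent until the buses sum everything, so a host can hand channels out to worker threads. This toy Python example (with a bare gain stage standing in for a real plugin chain) only illustrates the idea; it says nothing about how Reaper or SAC actually schedule audio internally:

```python
from concurrent.futures import ThreadPoolExecutor

def process_channel(samples, gain):
    # Trivial stand-in for a channel's plugin chain: just a gain stage.
    return [s * gain for s in samples]

# Three pretend input channels, each with its own tiny sample buffer.
channels = {name: [0.1, 0.2, -0.3] for name in ("kick", "snare", "vox")}

# Each channel is independent until the buses sum, so the per-channel
# work can be farmed out to a pool of workers.
with ThreadPoolExecutor() as pool:
    futures = {name: pool.submit(process_channel, buf, 0.5)
               for name, buf in channels.items()}
    mixed = {name: f.result() for name, f in futures.items()}
```

The hard part in a real engine isn’t this fan-out – it’s doing it reliably inside a few milliseconds, every buffer, without the OS scheduler getting in the way. That’s presumably where the overhead argument comes from.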

Where artificial scarcity REALLY reared its head was when I decided to try migrating from my favorite EQ to the “outboard” EQ plug included with SAC. I was happily getting it instantiated in all the places I wanted it when, suddenly, a dialog box opened and informed me that no more instances were available.

The host machine wasn’t even close to being overloaded.

I may just be one of those “danged entitled young people,” but it doesn’t compute for me that I should have to buy a “pro” version of a plugin just to use more than an arbitrary number of instances. It’s included with the software! I’ve already paid for the right to use it, so what’s the problem with me using 32 instances instead of 24?

I’m sorry, but I don’t get it.

There’s also the whole issue that SAC doesn’t offer native functionality for recording. Sure, I understand that the focus of the program is live mixing. Like the EQ plugin, though, I get really put off by the idea that I HAVE to use a special link that only works with SAW Studio (which is spendy, and has to be purchased separately) in order to get “native” recording functionality.

Push Comes To Shove

In the end, that last point above was what got me to go over to Reaper. I thought, “If I’m going to run this whole program in the background anyway, why not try just using it for everything?”

The results have been wonderful. I’m able to access critical functionality – especially for plugins – much faster than I ever could in SAC. I can pretty much lay out my Reaper console in any way that makes sense to me. I can have as many sends as I please, and those sends can be returned into any channel I please. I can chain plugins with all kinds of unconventional signal flows. I can have as many physical outputs on one console as the rig can handle.

In Reaper, I have much more freedom to do things my own way, and I like that.

As I’ve said before, SAC gets a lot of things right. In fact, I’ve customized certain parts of Reaper to have a SAC-esque feel.

It’s just that Reaper has the edge in doing what I want to do, instead of just doing what a single developer thinks it should do.


A Guide: DMX, Computers, and LED Light Fixtures

A walkthrough for building a computer-controlled lighting system from the ground up.

Back in 2010, or thereabouts, I was finishing a Bachelor of Science in Information Technology. To actually graduate, I had to prepare a capstone project – a cross-disciplinary work that would show that I had actually internalized my coursework. I decided that I wanted to apply my knowledge to the process of building a DMX lighting computer with a remote.

I did actually build and test a prototype system. Indeed, a simpler system based on my project is what I use to drive the lighting rig at Fats Grill.
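For the curious, the data side of DMX512 is refreshingly simple: each packet is a start code (0x00 for ordinary dimmer/fixture data) followed by up to 512 one-byte channel levels. The electrical framing (the break and mark-after-break) is handled by the interface hardware. Here’s a minimal Python sketch of just the slot data; the fixture channel assignments are made up:

```python
def build_dmx_packet(levels):
    """Build the slot portion of a DMX512 packet: a null start code
    (0x00) followed by up to 512 channel levels, one byte each.
    The break/mark-after-break timing is left to the interface."""
    if len(levels) > 512:
        raise ValueError("DMX512 carries at most 512 slots")
    if any(not 0 <= v <= 255 for v in levels):
        raise ValueError("channel levels are single bytes (0-255)")
    return bytes([0x00] + list(levels))

# A hypothetical 3-channel RGB fixture at half intensity.
packet = build_dmx_packet([127, 127, 127])
```

The real work in a lighting controller is deciding what those 512 bytes should be every refresh; getting them onto the wire is the easy part.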

At the time, my perception of the project was just that it was a way to finish my degree. The ironic result of this was that the manual, which was clearly written as a document to help lighting techs do things, was never actually given to anyone who would do something with it.

That’s changing today.

I went through the project, corrected some things, removed some overly specific bits, and saved it as a PDF.

It’s completely free – Click here to download.

You Should Try A Custom-Built Digital Console. Or Not.

Custom-made digital consoles have incredible power, but they aren’t for everybody.

I’ve been a huge fan of digital consoles since about 2001. Back when I was studying at The Conservatory of Recording Arts and Sciences, it took one day in the digital studio to convince me that digital was the way to go. At the time, that room had two TMD-4000 consoles cascaded together. The functionality of those two consoles rivaled that of the much, much, much, much, much, much, (am I going to say, “much,” again? YES!), much more expensive SSL 4056 in the “A” room next door.

Now, I’m not here to argue about sonics. Having heard audio in both the digital room and Studio A, I can tell you that things sounded “just fine” in both places. Some folks might want to make a huge deal out of which consoles seem to sound better than other consoles. That’s not what I’m here to do. What I’m talking about here is functionality – the kinds of nifty tricks that different consoles can pull off.

Anyway, my first digital console was a DM-24. I now have two of them, actually.

I dropped the first one on concrete during an event load-in.

That DM-24 still works pretty well, surprisingly.

The Next Step

Fast-forward to 2011. I’m working at Fats Grill, and I’m tired of lugging my original, slightly-dinged-by-concrete DM-24 in and out of the place every week. (This was before I got my hands on the other DM, because it hadn’t been decommissioned yet. That’s another story.) It was time to get another console, but I couldn’t find anything I really liked at a price that I could justify.

Mostly, it was The Floyd Show’s fault.

This isn’t actually a tangent. Stick with me, folks.

See, we had featured the band, and the show had gone really well, but I had to submix a good chunk of their inputs. My DM was configured to act as both FOH and a virtual monitor console (more on that in another post), so I only had 15 channels that I could work with “natively” – with full, individual routing, and all that.

I wanted to be able to do the entire Floyd Show natively, on one console. I also wanted to keep full, virtual monitor console functionality. If I could do that, I figured that I could do the same for any other band that came through.

There were consoles in my price range with all the necessary analog inputs, but not enough actual channels or routing wizardry to do the virtual monitor thing. I also wasn’t fond of their overall implementation.

The single or cascaded console solutions that would do what I wanted were more than I could justify spending.

What’s a guy to do?

As it turns out, the next step in the “more bang for the buck” digital progression is to build your own console, using off-the-shelf audio interfaces and preamps. General computing platforms (like Windows) run on hardware that’s now powerful enough to stay responsive while handling lots of audio processing. That same hardware and software can also be made plenty reliable enough to function in a mission-critical environment like sound reinforcement.

The Magic

I ended up building a 24-input, 24-output rig, which originally ran Software Audio Console. I’ve since switched to Reaper, with some custom setup work to make the software more friendly to live work. (The “why” of that switch will be yet another post). On this kind of rig, the functionality available to an audio tech is extensive:

  • You can have independent FOH and monitor consoles in one box. The monitor console can be completely independent of FOH – aside from your preamp gains – or you can make it dependent on FOH processing by making some routing changes. You could even make the monitor console dependent on only part of the FOH processing stack, if you’re willing to do some fancier routing.
  • You could conceivably have multiple monitor consoles, configured independently. You could have multiple FOH consoles if you so desired. The only limit is how much processing the computer platform can do at an acceptable latency.
  • You can have as many monitor sends, mix feeds, and cue buses as you have physical outputs available.
  • Any regular channel on the console can have sends or be configured as a bus receive. Any channel. If you need full matrix output functionality, all you have to do is add the appropriate sends to the appropriate channels that are receiving other channels and feeding an output. If you need another bus, you just add one.
  • Since all your console outputs and buses can be regular channels, you can insert any processing on those channels that you please. None of this, “you can’t have that kind of EQ in that context because the engineering team didn’t think it was really important” stuff.
  • Drag and drop is available for all kinds of things. If you want to copy an EQ configuration to another channel, you just grab the EQ plug that’s set up properly, plop it into the target channel’s stack, delete the old EQ, and drag the new EQ to the proper spot. You can do the same for sends.
  • The channel processing stack is incredibly configurable. If you want an EQ to come before a compressor, you can make that happen. If you change your mind, you can reorder the channel processing stack by drag and drop. If you want to have a special EQ that isn’t part of the main audio chain, but instead does something wild with a parametric filter and then passes its output to a gate key or compressor sidechain, you can do that. You can have two extreme EQ setups that process in parallel. You can have a delay and reverb on a single channel that process in parallel, so that you don’t have to use two buses to address them.
  • For channel processing, you can use any plugin you want – as long as you don’t add noticeable latency to the system, of course. The “native” plugs that come with Reaper are killer, by the way:
    • The gate has a key input, hysteresis, and can be made into an expander with a simple adjustment to a “dry signal” control.
    • The compressor has a sidechain input, and also has a “dry signal” control, which means you can do parallel compression right in a single channel.
    • The EQ has as many bands as you want, including peaking, shelving, notch, bandpass, and high- and low-pass filters.
  • You can have permanent groups for channel faders and mutes, or you can get a temporary group by just multi-selecting what you want. (In fact, I use the temporary grouping a lot more than the assigned group functions.)
  • You can save as many mixes and projects as the host computer can hold, with any system-legal filename that you want, in any hierarchy that you want.
  • You can set up a VNC-based remote control system, as long as doing so doesn’t overload the system’s ability to process.
  • Since the whole thing is driven by an audio interface, you can always swap for another one if the current unit has an issue, or you want to try something different.
  • If you want more I/O, all you have to do is get an interface with more I/O, or cascade the current unit if that’s supported. You’re not tied to a manufacturer’s choice as to how much connectivity to include.
  • If you want a control surface, you can add one. You have all kinds of choices, from cheap to extravagant.
  • If the basic controls break, mice, trackballs and keyboards are only slightly more expensive than dirt. In the same vein, as long as you have a pointing device and keyboard attached, you effectively have a fallback control surface if the fancy one has a problem.
  • If you want a better screen, you can get one. Or two. Or as many as your video card can support.
  • You can multitrack record any show at any time, at a moment’s notice. You can even record to max-quality OGG files, and save a lot of disk space without a huge loss in audio quality.
  • You could do an automated mix if you wanted, with a bit of planning and setup.
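If the “any channel can be a bus receive” idea from the list above seems abstract, here’s a toy Python model of it. Every channel sums its inputs, applies its own processing, and can send to any other channel at an arbitrary level. The names, gains, and “processing” are all invented for illustration; this is the routing concept, not anything resembling Reaper’s internals:

```python
class Channel:
    """A channel that can also act as a bus: it has a processing
    stage and an arbitrary list of sends to other channels."""
    def __init__(self, name, process=lambda x: x):
        self.name = name
        self.process = process
        self.sends = []  # list of (destination Channel, send gain)

    def send_to(self, dest, gain=1.0):
        self.sends.append((dest, gain))

def mix(channels, inputs):
    """inputs: {channel_name: sample}. Returns each channel's
    post-processing output. Sends are applied in list order, so
    list bus channels after their sources."""
    buffers = {ch.name: inputs.get(ch.name, 0.0) for ch in channels}
    outputs = {}
    for ch in channels:
        out = ch.process(buffers[ch.name])
        outputs[ch.name] = out
        for dest, gain in ch.sends:
            buffers[dest.name] += out * gain
    return outputs

vox = Channel("vox", process=lambda x: x * 0.8)    # pretend fader/EQ
verb = Channel("verb", process=lambda x: x * 0.5)  # pretend reverb return
vox.send_to(verb, gain=0.25)

out = mix([vox, verb], {"vox": 1.0})
# vox: 1.0 * 0.8 = 0.8; verb receives 0.8 * 0.25 = 0.2, processed to 0.1
```

Matrix outputs fall out of this model for free: an “output” is just another channel, and anything can send to it. That’s the heart of the flexibility argument.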

I’m sure that, somewhere, you can get a prebuilt digital console with all of this functionality. I just can’t think of anywhere that you can get it for less than $20,000 or so. If I remember correctly, the complete build price for the rig that I’ve just described is about $3000.

What To Be Careful Of

With everything I’ve laid out in the list above, you can probably tell that I’m pretty sold on this whole concept. Having all the functionality that my rig provides means that I can do all kinds of things that aren’t really expected in a small venue context – the most notable thing probably being that I have an independent monitor console, and lots of mixes to work with.

Even with all the positives, it’s important that I tell you about the risks and, shall we say, contraindications for putting together a rig like this:

  • This probably should not be your first mixing console. All the options and flexibility can be overwhelming for people who are just starting to learn the craft of live audio.
  • If big chunks of the terminology I’ve used above seem foreign to you, you should definitely do some homework before you try one of these rigs. Otherwise, you may be bewildered, or start doing things without knowing why you’re doing them.
  • If you don’t have a great grasp of how signal flows in a mix rig, this kind of setup isn’t the right choice. A lot of the system’s magic comes from being able to throw audio around in all kinds of ways, and you need to know exactly what you’re doing and why you’re doing it. (I would rate myself as having professional-level competence in terms of understanding signal flow, and I can still back myself into a corner when I forget to think things through.)
  • If your mix rig needs to be used by lots of different audio techs, especially BEs (Band Engineers), this kind of mix system is a bad choice. Very few people use them at the moment, and they’re not what most BEs expect when they roll up to a venue.
  • Rigs like this aren’t likely to be acceptable on riders anytime soon.
  • If you aren’t comfortable with digging around in computer hardware and software, you should think twice about diving into a rig like this.
  • If you don’t have any experience with installing DAW hardware and software, and what can go wrong with DAW setups, you should allow a lot of time for getting your rig running. Or, just get a traditional console.
  • If you aren’t keen on doing your own testing, this kind of system probably isn’t for you.
  • If you can’t get comfortable with the idea that there’s no support except for yourself and what you can find online, this idea is probably something to skip.
  • If you’re absolutely sold on working a lot of controls at the same time, you either need to attach and configure a really good control surface, or just get a regular console.
  • These rigs tend to be a bit slower to operate than traditional consoles, in terms of user interface. If you’re not okay with that, you either need to put in a good control surface, or just stick with what you’re fast on.
  • Even though you can save money overall on these systems, you need to spend dough on the important bits. USB interfaces are cheap, but getting decent latency out of them can be hard or even impossible. FireWire or PCI is the way to go.
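On the latency point, the arithmetic is worth seeing. Each audio buffer in the chain adds buffer_size / sample_rate seconds, and a round trip pays for at least one input buffer and one output buffer, plus converter and driver overhead. A quick sketch:

```python
def buffer_latency_ms(buffer_samples, sample_rate):
    """One-way latency contributed by a single audio buffer."""
    return buffer_samples / sample_rate * 1000

print(buffer_latency_ms(128, 48000))   # about 2.67 ms per buffer
print(buffer_latency_ms(1024, 48000))  # about 21.3 ms per buffer
```

At a 1024-sample buffer, input plus output alone is over 40 ms – clearly audible to a performer on monitors. That’s why an interface that stays stable at small buffer sizes matters more than almost any other spec.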

With all that said, I just can’t help but be a bit giddy about how unconventional and powerful systems like these can be.

I’ll even help you build one, if you’re willing to throw some money my way. 🙂