
Why I Think Steam Machines Are Cool

My audio-human mind races when thinking of high-performance, compact, affordable machines.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

“Wait,” you’re thinking, “I thought this site was about live shows. Steam Machines are gaming devices.”

You’re right about that. What you have to remember (or just become aware of), is that I have a strange sort of DIY streak. It’s why I assembled my own live-audio console from “off the shelf” products. I really, really, REALLY like the idea of doing powerful things with concert sound via unorthodox means. An unorthodox idea that keeps bubbling up in my head is that of a hyper-customizable, hyper-expandable audio mix rig. It could be pretty much any size a user wanted, using pretty much whatever audio hardware a user wanted, and grow as needed. Also, it wouldn’t be too expensive. (About $900 per 16x16 channel “block.”)

When I look at the basic idea of the Valve Steam Machine, I see a device that has the potential to be a core part of the implementation.

But let’s be careful: I’m not saying that Steam Machines can do what I want right now. I’m not saying that there aren’t major pitfalls, or even dealbreakers to be encountered. I fully expect that there are enormous problems to solve. Just the question of how each machine’s audio processing could be conveniently user-controlled is definitely non-trivial. I’m just saying that a possibility is there.

Why is that possibility there?

The Box Is Prebuilt

The thing with prebuilt devices is that it’s easier for them to be small. A manufacturer building a large number of units can get custom parts that support a compact form factor, put it all together, and then ship it to you.

Of course, when it comes to PCs, you can certainly assemble a small-box rig by hand. However, when we’re talking about using multiple machines, the appeal of hand-building multiple boxes drops rapidly. So, it’s a pretty nice idea that a compact but high(er) performance computing device can be had for little effort.

The System Is Meant For Gaming

Gaming might seem like mere frivolity, but these days, it’s a high-performance activity. We normally think of that high performance as residing primarily in the graphics subsystem – and for good reason. However, I also think a game-capable system could be great for audio. I have this notion because games are so reliant on audio behaving well.

Take a game like a modern shooter. A lot of stuff is going on: Enemy AI, calculation of where bullets should go, tracking of who’s shooting at whom, collision detection, input management, the knowing of where all the players are and where they’re going, and so on. Along with that, the sound has to work correctly. When anybody pulls a trigger, a sound with appropriate gain and filtering has to play. That sound also has to play at exactly the right time. It’s not enough for it to just happen arbitrarily after the “calling” event occurs. Well-timed sounds have to play for almost anything that happens. A player walks around, or a projectile strikes an object, or a vehicle moves, or a player contacts some physics-enabled entity, or…

You get the idea.

My notion is that, if the hardware and OS of a Steam Machine are already geared specifically to make this kind of thing happen, then getting pro-audio to work similarly isn’t a totally alien application. It might not be directly supported, of course, but at least the basic device itself isn’t in the way.

The System Is Customizable

My understanding of Steam Machines is that they’re meant to be pretty open and “user hackable.” This excites me because of the potential for re-purposing. Maybe an off-the-shelf Steam Machine doesn’t play nicely with pro-audio hardware? Okay…maybe there’s a way to take the box’s good foundation and rebuild the upper layers. In theory, a whole other OS could be runnable on one of these computers, and a troublesome piece of hardware might be replaceable (or just plain removable).


I acknowledge that all of this is off in the “weird and theoretical” range. My wider goal in pointing it out is to say that, sometimes, you can grab a thing that was intended for a different application and put it to work on an interesting task. The most necessary component seems to be imagination.


Look Inside

If a piece of gear quits, you might actually be able to do something about it.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

Along with “concert sound,” I also handle audio for my church. Just recently, we had a couple of audio processors decide that they didn’t want to cooperate anymore. One was an FBQ3102 equalizer, and the other was a DEQ2496 (a “not just an EQ but not quite a loudspeaker manager” sort of thing).

We had recently moved to a new space, and the big rack-o-audio had done some bouncing around in the back of a pickup. It had also been bumped down a set of stairs. Whether those were the actual events that precipitated the failures or not, everything together was just a little too much for the units in question. The 3102 stuck channel #1 in a permanent state of bypass, and the DEQ was making all kinds of weird, pink-noisey sounds at its outputs.

After the problems were discovered, I pulled the units from the rack on two separate days. (I won’t get into the full story just now.) On the first day, one of our congregants asked if I was going to send the DEQ off to be repaired. I responded in the negative, saying that I wanted to open it up and have a look first.

I had been inside the DEQ before, and I had a sneaking suspicion that the problem was NOT a failure of some device soldered to one of the boards. My guess was that a ribbon connector had worked loose, and just needed to get reseated. As it turned out, the simple act of pulling the unit out of the rack had gotten the connector to re-settle, and so my manual “pull and reseat” was just for good measure. The DEQ went back in the rack, and has returned to operating as expected.

I can’t remember when I got that DEQ, but I think it was something like seven years ago. Maybe more. They’re great units, and I haven’t found anything similar to them at their price point.

Anyway.

The FBQ was less cooperative. I got it opened up, and tried replugging its internal connections, but that wasn’t the problem. I eventually tried shorting the first pair of pins on the misbehaving channel’s “In/Out” switch – which made the bypass relay fire – but the unit just seemed to get worse and worse. I eventually shrugged my shoulders, replaced the FBQ with an old, faithful, Feedback Destroyer, and that was that.

So, what’s my point here?

The thesis of this article is the title, rendered in imperative form. If you have a piece of gear that seems to be misbehaving, then look inside.

Don’t Be Afraid Of The Wrong Things

It’s easy for us to be intimidated by the thought of pulling the cover off of a misbehaving device. We can’t be sure that we’ll understand anything we see. We won’t know what all the components do. Most of us are NOT electrical engineers. (I surely am not.)

The thing is, though, that not understanding everything about what’s going on inside a piece of gear doesn’t mean there’s nothing at all we can do to service one.

Now, before I go any further, yes, there are some things that you do need to be suitably wary of. Electricity can, and does, end people’s lives. If you’re going to pop open a device that’s giving you trouble, then realize that you’re doing so at your own risk. NOBODY BUT YOU is responsible if you injure or kill yourself, or wreck the unit. Even if you unplug the offending equipment, discharging a capacitor such that you are the path to ground is still a risk.

You have to be careful.

At the same time, being completely terrified of the insides of an audio device just isn’t necessary.

It’s Like Working On A Computer

I mean, at least a few of us have done work inside our computers, right? We’ve swapped out drives, or installed PCI cards, or even assembled a PC from the ground up, right? We don’t know about every little thing soldered to every board, and yet we’ve been able to get a box working. Lots of audio devices these days are assembled in what I would call “computer fashion.” You have various circuit boards that components attach to, a power supply, and various cables, slots, and other connectors that get the different sub-assemblies to exchange voltage. It’s all the same basic principles, it’s just arranged differently.

Yes, it’s possible for things soldered to a board to fail. There was that whole “Capacitor Plague” that Wikipedia describes as having gone on for eight years or so. There are such things as bad solder joints.

But here’s the overall view: A device soldered to a board is much less likely to have a bad connection than a joint which relies on, say, friction, to keep things mated. Assuming the solder joint is sound, it’s pretty dang hard to disconnect an individual component from a circuit. You have to act deliberately and apply a sizable amount of heat to separate the part. A ribbon cable, on the other hand, can get loose from something as simple (and not deliberate) as vibration. The same thing can happen with a connection that works like a PCI card.

If you’ve got a piece of gear with a problem, popping the thing open and searching for “manually pullable” wiring assemblies and/or snap-in components is probably worth your time. Reseating things like ribbon connectors and wiring harnesses helps to eliminate simple connection problems from the equation, and that just might solve the whole thing. There’s a real chance that looking at the easy stuff will save you a costly ship-repair-ship cycle.

So, look inside.


Not Remotely Successful

Just getting remote access to a mix rig is not a guarantee of being able to do anything useful with that remote access.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

The nature of experimentation is that your trial may not get you the expected results. Just ask the rocket scientists of the mid-twentieth century. Quite a few of their flying machines didn’t fly. Some of them had parts that flew – but only because some other part exploded.

This last week, I attempted to implement a remote-control system for the mixing console at my regular gig. I didn’t get the results I wanted, but I learned a fair bit. In a sense, I think I can say that what I learned is more valuable than actually achieving success. It’s not that I wouldn’t have preferred to succeed, but the reality is that things were working just fine without any remote control being available. It would have been a nice bit of “gravy,” but it’s not like an ability to stride up to the stage and tune monitors from the deck is “mission critical.”

The Background

If you’re new to this site, you may not know about the mix rig that I use regularly. It’s a custom-built console that runs on general computing hardware. It started as a SAC build, but I switched to Reaper and have stayed there ever since.

To the extent that you’re talking about raw connectivity, a computer-hosted mix system is pre-primed for remote control. Any modern computer and accessible operating system will include facilities for “talking” to other devices over a network. Those connectivity facilities will be, at a basic level, easy to configure.

(It’s kind of an important thing these days, what with the Internet and all.)

So, when a local retailer was blowing out 10″ Android tablets for half price, I thought, “Why not?” I had already done some research and discovered that VNC apps could be had on Android devices, and I’ve set up VNC servers on computers before. (It’s not hard, especially now that the installers handle the network security configuration for you.) In my mind, I wasn’t trying to do anything exotic.

And I was right. Once I had a wireless network in place and all the necessary software installed, getting a remote connection to my console machine was as smooth as butter. Right there, on my tablet, was a view of my mixing console. I could navigate around the screen and click on things. It all looked very promising.

There’s a big difference between basic interaction and really being able to work, though. When it all came down to it, I couldn’t easily do the substantive tasks that would make having a remote a handy thing. It didn’t take me long to realize that tuning monitors while standing on the deck was not something I’d be able to do in a professional way.

A Gooey GUI Problem

At the practical level, the problem I was having was an interface mismatch. That is, while my tablet could display the console interface, the tablet’s input methodology wasn’t compatible with the interface being displayed.

Now, what the heck does that mean?

Reaper, like lots of other audio-workstation interfaces, is built for high-precision pointing devices. You might not think of a mouse or trackball as “high precision,” but when you couple one of those input devices with the onscreen pointer, high precision is what you get. The business end of the pointer is clearly visible, only a few pixels wide, and the “interactivity radius” of the pointer is only slightly larger. There is an immediately obvious and fine-grained discrimination between what the pointer is set to interact with, and what it isn’t. With this being the case, the software interface can use lots of small controls that are tightly packed.

Additionally, high-precision pointing allows for fast navigation across lots of screen area. If you have the pointer in one area of the screen and invoke, say, an EQ window that pops open in another area, it’s not hard to get over to that EQ window. You flick the mouse, your eye finds the pointer, you correct on the fly, and you very quickly have control localized to the new window. (There’s also the whole bonus of being able to see the entire screen at once.) With high-precision input being available, the workstation software can make heavy use of many independent windows.

Lastly, mice and other high-precision pointers have buttons that are decoupled from the “pointing” action. Barring some sort of failure, these buttons are very unambiguous. When the button is pressed, it’s very definitely pressed. Clicks and button holds are sharply delineated and easily parsed by both the machine and the user. The computer gets an electrical signal, and the user gets tactile feedback in their fingers that correlates with an audible “click” from the button. This unambiguous button input means that the software can leverage all kinds of fine-grained interactions between the pointer position and the button states. One of the most important of those interactions is the dragging of controls like faders and knobs.

So far so good?

The problem starts when an interface expecting high-precision pointing is displayed on a device that only supports low-precision pointing. Devices like phones and tablets that are operated by touch are low-precision.

Have you noticed that user interfaces for touch-oriented devices are filled with big buttons, “modal” elements that take over the screen, and expectations for “big” gestures? It’s because touch control is coarse. Compared to the razor-sharp focus of a mouse-driven pointer, a finger is incredibly clumsy. Your hand and finger block a huge portion of the screen, and your finger pad contacts a MASSIVE area of the control surface. Sure, the tablet might translate that contact into a single-pixel position, but that’s not immediately apparent (or practically useful) to the operator. The software can’t present you with a bunch of small subwindows, as the minuscule interface elements can’t be managed easily by the user. In addition, the only way for the touch-enabled device to know the cursor’s location is for you to touch the screen…but touch, by necessity, has to double as a “click.” Interactions that deal with both clicks and movement have to be forgiving and loosely parsed as a result.

Tablets don’t show big, widely spaced controls in a single window because it looks cool. They do it because it’s practical. When a tablet displays a remote interface that’s made for a high-precision input methodology, life gets rather difficult:

“Oh, you want to display a 1600 x 900, 21″ screen interface on a 1024 x 600, 10″ screen? That’s cool, I’ll just scale it down for you. What do you mean you can’t interact with it meaningfully now?”

“Oh, you want to open the EQ plugin window on channel two? Here you go. You can’t see it? Just swipe over to it. What do you mean you don’t know where it is?”

“Oh, you want to increase the send level to mix three from channel four? Nice! Just click and drag on that little knob. That’s not what you touched. That’s also not what you touched. Try zooming in. I’m zoomi- wait, you just clicked the mute on channel five. Okay, the knob’s big now. Click and drag. Wait…was that a single click, or a click and hold? I think that was…no. Okay, now you’re dragging. Now you’ve stopped. What do you mean, you didn’t intend to stop? You lifted your finger up a little. Try again.”

With an interface mismatch, everything IS doable…but it’s also VERY slow, and excruciatingly difficult compared to just walking back to the main console and handling it with the mouse. Muting or unmuting a channel is easy enough, but mixing monitors (and fighting feedback) requires swift, smooth control over lots of precision elements. If the interface doesn’t allow for that, you’re out of luck.

Control States VS. Pictures Of Controls

So, can systems be successfully operated by remotes that don’t use the same input methodology as the native interface?

Of course! That’s why traditional-surface digital consoles can be run from tablets now. The tablet interfaces are purpose-built, and involve “state” information about the main console’s controls. My remote-control solution didn’t include any of that. The barrier for me was that I was trying to use a general-purpose solution: VNC.

With VNC, the data transmitted over the network is not the state of the console’s controls. The data is a picture of the console’s controls only, with no control-state data involved.

That might seem confusing. You might be saying, “But there is data about the state of the controls! You can see where the faders are, and whether the mutes are pressed, and so on.”

Here’s the thing, though. You’re able to determine the state of the controls because you can interpret the picture. That determination you’ve made, however, is a reconstruction. You, as a human, might be seeing a picture of a fader at a certain level. Because that picture has a meaning that you can extract via pattern recognition, you can conceptualize that the fader is in a certain state – the state of being at some arbitrary level of gain. To the computer, though, that picture has no meaning in terms of where that fader is.

When my tablet connects to the console via VNC, and I make the motions to change a control’s state, my tablet is NOT sending information to the console about the control I’m changing. The tablet is merely saying “click at this screen position.” For example, if clicking at that screen position causes a channel’s mute to toggle, that’s great – but the only machine aware of that mute, or whether that mute is engaged or disengaged, is the console itself. The tablet itself is unaware. It’s up to me to look at the updated picture and decide what it all means…and that’s assuming that I even get an updated picture.
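To make the distinction concrete, here’s a toy sketch in Python. This is entirely invented for illustration – it does not reflect the real VNC wire format or any actual console protocol – but it shows why a coordinate-only message leaves all the interpretation work on the console side:

```python
# A toy illustration (invented, not the real VNC protocol) of the
# difference described above. A picture-based remote can only report
# where a click landed; only the console can turn that into meaning.

# What a VNC-style remote sends: raw pointer coordinates.
vnc_style_message = {"event": "click", "x": 512, "y": 344}

# What a state-aware remote would send: the control and its new state.
state_style_message = {"channel": 5, "control": "mute", "value": True}

# Only the console knows which control lives at which pixel.
SCREEN_LAYOUT = {(512, 344): ("channel 5", "mute")}  # hypothetical hit map

def interpret_click(msg):
    """What the console must do with a coordinate-only message."""
    return SCREEN_LAYOUT.get((msg["x"], msg["y"]), None)

print(interpret_click(vnc_style_message))  # ('channel 5', 'mute')
```

The state-style message needs no lookup table at all – it already says what it means, which is the whole point of the next section.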

The cure to all of this is to build a touch-friendly interface which is aware of the state of the controls being operated. You can present the knobs, faders, and switches in whatever way you want, because the remote-control information only concerns where that control should be set. The knobs and faders sit in the right place, because the local device knows where they are supposed to be in relation to their control state. Besides solving the “interface mismatch” problem, this can also be LIGHT YEARS more efficient.

(Disclaimer: I am not intimately aware of the inner workings of VNC or any console-remote protocol. What follows are only conjectures, but they seem to be reasonable to me.)

Sending a stream of HD (or near-HD) screenshots across a network means quite a lot of data. If you’re using jpeg-esque compression, you can crush each image down to 100 kilobytes and still have things be usable. VNC can be pretty choosy about what it updates, so let’s say you only need one full image every second. You won’t see meters move smoothly or anything like that, but that’s the price for keeping things manageable. The data rate is about 819 kbits/second, plus the networking overhead (packet headers and other communication).

Now then. Let’s say we’ve got some remote-control software that handles all “look and feel” on the local device (say, a tablet). If you represent a channel as an 8-bit identifier, that means you can have up to 256 channels represented. You don’t need to actually update each channel all the time to simply get control. Data can just be sent as needed, of course. However, if you want to update the channel meters 30 times per second, that meter data (which could be another 8-bit value) has to be attached to each channel ID. So, 30 times a second, 256 8-bit identifiers get 8 bits of meter information data attached to each of them. Sixteen bits multiplied by 256 channels, multiplied by 30 updates/second works out to about 123 kbits/second.

Someone should check my math and logic, but if I’m right, nicely fluid metering across a boatload of channels is possible at less than 1/6th the data rate of “send me a screenshot” remote control. You just have to let the remote device handle the graphics locally.
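For anyone who does want to check the math, here’s a quick Python sketch of both back-of-envelope calculations, using the same assumptions as above (100 kB per compressed screenshot at one frame per second, versus 256 channels of 16-bit meter updates at 30 per second):

```python
# Back-of-envelope check of the two data rates discussed above.
# All figures are assumptions from the text, not measurements.

SCREENSHOT_BYTES = 100 * 1024           # ~100 kB per compressed frame
screenshot_rate = SCREENSHOT_BYTES * 8  # bits/second at 1 frame per second
print(f"screenshot stream: {screenshot_rate / 1000:.0f} kbit/s")  # 819

CHANNELS = 256
BITS_PER_CHANNEL = 8 + 8                # 8-bit channel ID + 8-bit meter value
UPDATES_PER_SECOND = 30
meter_rate = CHANNELS * BITS_PER_CHANNEL * UPDATES_PER_SECOND
print(f"meter stream: {meter_rate / 1000:.1f} kbit/s")            # 122.9

print(f"ratio: {screenshot_rate / meter_rate:.1f}x")              # 6.7x
```

So the “less than 1/6th the data rate” claim holds up: the screenshot stream costs roughly 6.7 times as many bits as fluid metering across all 256 channels.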

Control-state changes are even easier. A channel with fader, mute, solo, pan, polarity, a five-selection routing matrix, and 10 send controls needs to have 20 “control IDs” available. A measly little 5-bit number can handle that (and more). If the fader can handle 157 “integer” levels (+12 dB to -143 dB and “-infinity”) with 10 fractional levels of .1 dB between each integer (1570 values total), then the fader position can be more than adequately represented by an 11-bit number. If you touch a fader and the software sends a control update every 100th of a second, then a channel ID, control ID, and fader position have to be sent 100 times per second. That’s 24 bits multiplied by 100, or 2.4 kbits/second.

That’s trivial compared to sending screenshots across the network, and still almost trivial when compared to the “not actually fast” data rate required to update the meters all the time.
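As a sanity check on that arithmetic, here’s a sketch of packing such a message in Python. The 3-byte layout (8-bit channel ID, 5-bit control ID, 11-bit position) is my own invention for illustration, not any real console protocol:

```python
# A hypothetical 24-bit control message: 8-bit channel ID, 5-bit
# control ID, 11-bit fader position. Layout is invented, not a real
# protocol; it just demonstrates that everything fits in 3 bytes.

def pack_control(channel_id: int, control_id: int, position: int) -> bytes:
    """Pack channel, control, and position into 3 bytes (big-endian)."""
    assert 0 <= channel_id < 256    # 8 bits: up to 256 channels
    assert 0 <= control_id < 32     # 5 bits: covers the 20 controls named
    assert 0 <= position < 2048     # 11 bits: covers 1570 fader steps
    word = (channel_id << 16) | (control_id << 11) | position
    return word.to_bytes(3, "big")

msg = pack_control(channel_id=3, control_id=0, position=1230)
print(len(msg))  # 3 bytes per update

# 100 updates per second while a fader is being dragged:
rate = len(msg) * 8 * 100
print(f"{rate / 1000:.1f} kbit/s")  # 2.4 kbit/s
```

Three bytes, a hundred times a second, really is 2.4 kbits/second – a rounding error next to the screenshot approach.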

Again, let me be clear. I don’t actually know if this is how “control state” remote operation works. I don’t know how focused the programmers are on network data efficiency, or even if this would be a practical implementation. It seems plausible to me, though.

I’m rambling at this point, so let me tie all this up: Remote control is nifty, and you can get the basic appearance of remote control with a general-purpose solution like VNC. If you really need to get work done in a critical environment, though, you need a purpose-built solution that “plays nice” at both the local and remote ends.


My Interview On AMR

I was invited to do a radio show on AMR.fm! Here are some key bits.

About a week ago, I was invited into “The Cat’s Den.” While that might sound like a place where a number of felines reside, it’s actually the show hosted by John, the owner of AMR.fm. We talked about a number of subjects related to local music and small venues. John was kind enough to make the show’s audio available to me, and I thought it would be nifty to chop it all up into topical segments.

The key word up there being “chop.”

That is, what you’re hearing in these files has been significantly edited. The whole thing was about two hours long, and there was a lot of “verbal processing” that occurred. That’s what happens during a live, long-form interview, but it’s not the best way to present the discussion afterwards. Even while tightening up the key points of the show, I’ve taken pains to not misrepresent what either of us was getting at. The meaning of each bit should be fully intact, even if every sentence hasn’t been included.

So…

The Introduction

Supatroy

A quick reference to an earlier show that featured Supatroy Fillmore. (Supatroy has done a lot of work in our local music scene.)

Why The Computerization Of Live-Audio Is A Great Thing

Computerizing live-sound allows guys like me to do things that were previously much harder (or even impossible) to do.

How I Got Started

A little bit about my pro-audio beginnings…way back in high-school.

Building And Breaking Things

I’m not as “deep into the guts” of audio equipment as the folks who came before me. I give a quick shout-out to Tim Hollinger from The Floyd Show in this bit.

Functional Is 95%

A segment about why I’m pretty much satisfied by gear that simply passes signal in a predictable and “clean” way.

The Toughest Shows

The most challenging shows aren’t always the loudest shows. Also, the toughest shows can be the most fun. I use two “big production” bands as examples: Floyd Show and Juana Ghani. The question touches on an interview that I did with Trevor Hale.

I Worry Most About Monitor World

If something’s wrong in FOH, I can probably hear it. If something’s not quite right on the stage, it’s quite possible that I WON’T hear it – and that worries me.

Communication Between Bands And Audio Humans

I’m not as good at communicating with bands as I’d like to be. Also, I’m a big proponent of people politely (but very audibly) asking for what they need.

The Most Important Thing For Bands To Do

If a band doesn’t sound like a cohesive ensemble without the PA, there’s no guarantee that the PA and audio-human will be able to fix that.

Why Talk About Small-Venue Issues?

I believe that small-venue shows are the backbone of the live-music industry. As such, I think it’s worthwhile to talk about how to do those shows well.

Merchant Royal

John asks me about who’s come through Fats Grill and really grabbed my attention. I proceed to pretty much gush about how cool I think Merchant Royal is.

What Makes A Great Cover Tune?

In my opinion, doing a great job with a cover means getting the song to showcase your own band’s strengths. I also briefly mention that Luke Benson’s version of “You Can’t Always Get What You Want” actually gets me to like the song. (I don’t normally like that song.)

The Issues Of A Laser-Focused Audience

I’m convinced that most people only go to shows with their favorite bands in their favorite rooms. Folks that go to a bar or club “just to check out who’s playing” seem to be incredibly rare these days. (Some of these very rare “scene supporting” people are John McCool and Brian Young of The Daylates, as well as Christian Coleman.) If a band is playing a room that the general public sees as a “venue” as opposed to a “hangout,” then the band isn’t being paid to play music. The band is being paid based on their ability to be an attraction.

Look – it’s complicated. Just listen to the audio.

Everybody Has Due Diligence

Bands and venues both need to promote shows. Venues also need to be a place where people are happy to go. When all that’s been done, pointing fingers and getting mad when the turnout is low isn’t a very productive thing.

Also: “Promoting more” simply doesn’t turn disinterested people into interested people – at least as far as I can tell.

Shout Outs

This bit is the wrap up, where I say thanks to everybody at Fats Grill for making the place happen. John and I also list off some of our favorite local acts.


Rusted Moose Live Broadcast

Check out live music from Utah at AMR.fm.

When you leave a large land-based mammal out in the rain, you might just end up with a Rusted Moose. The stream is scheduled to begin at 7:00 PM, Utah local time (MST). The stream will be accessible through AMR.fm.

…and yes, we are definitely aware of the issues that cropped up with last week’s show. A live broadcast of a show that’s also live (to an actual audience in the room) is a thing with many moving parts, and we failed to nail down one of those moving parts. Specifically, we never positively determined what the broadcast feed was “listening” to – and wouldn’t you know, the feed was listening to the laptop’s built-in microphone.

Yowza.

I should write an article about all this sometime. 🙂


Why I Left SAC

I switched to Reaper from SAC because I wanted more flexibility to define my own workflow.

If you know me, you know that I’m a HUGE fan of my custom-built digital console. It has routing flexibility like nothing else I’ve ever worked with, is far less subject to the whims of manufacturers, and generally lets me do things that are difficult or even impossible with other setups.

What you may not know is that I didn’t always use Reaper as the main software package. I started off with SAC. I was actually very happy with SAC for a while, but the “bloom came off the rose” after a few frustrations popped up.

Don’t Get Me Wrong! SAC Is Rad

I won’t lie. I’m going to be pretty tough on SAC in this article.

The point isn’t to bash the program though.

Software Audio Console is a really neat, purpose-built labor of love. If nothing else, it shows that a reliable, live-sound-capable console can be run on a general-purpose computing platform. It has some great features and concepts, not the least of which is the “separate monitor console for each performer” workflow. That feature, coupled with integrated remote-control support, can potentially be VERY killer for the tech that works for professional bands who carry their own production. (Set everybody up with a remote, let ’em mix their own monitors, you run FOH, and life is dandy. Well, until one of the players causes a massive feedback spike. Anyway…)

SAC is efficient. SAC’s overall control scheme is great for live-audio, most of the time. SAC is stable and trouble free. SAC has very usable snapshot management. Using ASIO4All as a separate driver, I was able to use SAC for live mixing and Reaper for recording, with Reaper effectively running in the background.

SAC is a good piece of software.

If there’s any problem with SAC, it’s that the program is overly influenced by the personal preferences and workflow of its developer, Bob Lentini. If you want something markedly different, you’re out of luck.

It Started With An EQ

I’m a massive fan of Reaper’s native EQ plug. The only thing it’s missing is variable slope for the high-pass and low-pass filters. I honestly don’t know why anyone would want to buy an expensive, annoyingly copy-protected EQ plugin when Reaper’s EQ is so powerful.

Yup. I’m a bit of a fanboy. Not everyone may share my opinion.

Anyway.

Wanting to use Reaper’s EQ with SAC is what quickly revealed a “blind spot” with SAC’s workflow. I found out that adding FX to a channel was a bit clumsy. I also found out that manipulating FX on a channel was almost horrific.

To instantiate FX on a SAC channel, you have to find the FX control, click on it to get the channel FX chain to pop up, then use an un-filterable list of all available FX to find the one you want, click “Add,” and hope that you’ve gotten the chain order right.

If you didn’t get the order of the chain right, you have to de-instantiate one of the plugs and try again.

In Reaper, plugin instantiation can happen by clicking the insert stack, picking a plug from a filterable and customizable list, and…that’s it. If you got the plugin in the wrong spot, you can just drag it into the right one.

That may not seem like a huge difference, but the annoyance factor of SAC’s clumsiness accumulates greatly over time.

On the live-manipulation side, Reaper is leaps and bounds ahead. If I need to tweak an EQ on the fly (which happens every show, many times), all I have to do is click on the EQ plug in the stack. Immediately, the EQ pops its UI into view, and I can get to work.

In SAC, on the other hand, I have to (again) find the FX control, click to open the channel FX list, find the EQ, then double-click on it in the list to get the GUI to display. A few extra clicks might not seem like much, but this truly becomes a very awkward slog in a big hurry. In fairness, SAC does have a channel EQ that is VERY much more immediate, but what that ended up forcing me to do was to run my beloved plug as a “basic” EQ, and use the channel EQ for everything else. I’m not bothered by complexity, but unnecessary complexity IS something that I dislike.

There’s also SAC’s stubborn refusal to recognize that drag-and-drop is totally “a thing” now. In Reaper, I can drag plugins and sends between channels. In SAC, you can’t drag anything to any other channel. You can drag channels into a different order, but it’s not simple to do. (Some more unnecessary complexity.) In general, dragging things around and multiselecting in Reaper works exactly as you would expect, whereas SAC tends to be VERY finicky about where your mouse cursor is and which modifier key you’re using.

Artificial Scarcity and Workflow Lock-In

In a number of ways, SAC aims to provide a “crossover experience” to techs who are used to physical consoles. This is absolutely fine if it’s what you want, but going this route has a tendency to reduce flexibility. This loss of flexibility mostly comes from arbitrary limitations.

Most of these limitations have enough cushion to not be a problem. SAC’s channel count is limited to 72, which should be WAY more than enough for most of us in small-venue situations. With a SAC-specific workflow, six aux sends and returns are a lot more than I usually need, as are 16 groups and eight outputs.

The problem, though, is that you’re forced to adopt the workflow. Want to use a workflow that would require more than six sends? Tough. Want to use more than eight physical outputs on a single console? Too bad.

Again, there’s no issue if you’re fine with being married to the intended use-strategy. However, if you’re like me, you may discover that having a whole bunch of limited-output-count subconsoles is unwieldy when compared to a single, essentially unlimited console. You might discover that much more immediate channel processing access trumps other considerations. It’s a matter of personal preference, and the thing with SAC is that your personal preference really isn’t a priority. The developer has chosen pretty much everything for you, and if that’s mostly (but not exactly) what you want, you just have to be willing to go along.

Another “sort-of” artificial scarcity issue with SAC is that it’s built on the idea that multi-core audio processing is either unnecessary or a bad idea. The developer has (at least in the past) been adamant that multi-thread scheduling and management adds too much overhead to be worth it. I’m sure that this position is backed up with factual and practical considerations, but here’s my problem: Multi-core computers are here to stay, and they offer a ton of available horsepower. Simply choosing to ignore all that power strikes me as unhelpful. I have no doubt that some systems become unreliable when trying to multiprocess audio in a pseudo-realtime fashion – but I’d prefer to at least have the option to try. Reaper lets me enable multiprocessing for audio, and then make my own decision. SAC does no such thing. (My guess is that the sheer force of multi-core systems can muscle past the scheduling issues, but that’s only a guess.)
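The basic idea of spreading channel processing across cores is simple in principle: each channel’s plugin chain is independent until the mix bus, so channels can be farmed out to worker threads. Here’s a minimal, purely illustrative Python sketch of that fan-out — the function names are invented for the example, and a real console engine would do this in native code with lock-free buffers (pure-Python math is serialized by the interpreter’s GIL, so this shows the structure, not the speedup):

```python
from concurrent.futures import ThreadPoolExecutor

def process_channel(samples, gain):
    """Stand-in for one channel's DSP chain; here, just a gain stage."""
    return [s * gain for s in samples]

def process_block(channels, gains, workers=4):
    """Process one audio block by fanning the channels out to a thread pool.

    channels: list of per-channel sample lists for this block.
    gains:    one gain value per channel.
    Returns the processed per-channel blocks, in channel order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_channel, channels, gains))

# Two channels, one four-sample block each
out = process_block([[1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5]], [0.5, 2.0])
```

The hard part that the SAC developer objects to isn’t this fan-out itself, but doing it under a hard realtime deadline every buffer — which is exactly why I’d rather have the option exposed and decide per-machine.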

Where artificial scarcity REALLY reared its head was when I decided to try migrating from my favorite EQ to the “outboard” EQ plug included with SAC. I was happily getting it instantiated in all the places I wanted it when, suddenly, a dialog box opened and informed me that no more instances were available.

The host machine wasn’t even close to being overloaded.

I may just be one of those “danged entitled young people,” but it doesn’t compute for me that I should have to buy a “pro” version of a plugin just to use more than an arbitrary number of instances. It’s included with the software! I’ve already paid for the right to use it, so what’s the problem with me using 32 instances instead of 24?

I’m sorry, but I don’t get it.

There’s also the whole issue that SAC doesn’t offer native functionality for recording. Sure, I understand that the focus of the program is live mixing. Like the EQ plugin, though, I get really put off by the idea that I HAVE to use a special link that only works with SAW Studio (which is spendy, and has to be purchased separately) in order to get “native” recording functionality.

Push Comes To Shove

In the end, that last point above was what got me to go over to Reaper. I thought, “If I’m going to run this whole program in the background anyway, why not try just using it for everything?”

The results have been wonderful. I’m able to access critical functionality – especially for plugins – much faster than I ever could in SAC. I can pretty much lay out my Reaper console in any way that makes sense to me. I can have as many sends as I please, and those sends can be returned into any channel I please. I can chain plugins with all kinds of unconventional signal flows. I can have as many physical outputs on one console as the rig can handle.

In Reaper, I have much more freedom to do things my own way, and I like that.

As I’ve said before, SAC gets a lot of things right. In fact, I’ve customized certain parts of Reaper to have a SAC-esque feel.

It’s just that Reaper has the edge in doing what I want to do, instead of just doing what a single developer thinks it should do.


A Guide: DMX, Computers, and LED Light Fixtures

A walkthrough for building a computer-controlled lighting system from the ground up.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Back in 2010, or thereabouts, I was finishing a Bachelor of Science in Information Technology. To actually graduate, I had to prepare a capstone project – a cross-disciplinary work that would show that I had actually internalized my coursework. I decided that I wanted to apply my knowledge to the process of building a DMX lighting computer with a remote.

I did actually build and test a prototype system. Indeed, a simpler system based on my project is what I use to drive the lighting rig at Fats Grill.
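For context on what “driving” a lighting rig from a computer involves: a DMX universe is just up to 512 byte-sized channel levels, refreshed continuously, and USB interfaces typically wrap those levels in a small serial message. Here’s a minimal sketch of framing one universe for an Enttec-style widget (the “label 6” send-DMX message); the framing details are my assumption about that family of hardware, not something taken from the project manual:

```python
def dmx_packet(channels):
    """Frame DMX levels as an Enttec-style 'Output Only Send DMX' message.

    channels: list of 0-255 levels for DMX slots 1..n (n <= 512).
    Returns the raw bytes you'd write to the widget's serial port.
    """
    if not 1 <= len(channels) <= 512:
        raise ValueError("a DMX universe carries 1-512 channels")
    if any(not 0 <= c <= 255 for c in channels):
        raise ValueError("channel levels are single bytes (0-255)")
    data = bytes([0x00]) + bytes(channels)  # DMX start code 0, then levels
    length = len(data)
    return (bytes([0x7E, 0x06, length & 0xFF, (length >> 8) & 0xFF])  # start, label, length
            + data
            + bytes([0xE7]))  # end-of-message byte

# Example: slot 1 (dimmer) full on, slot 2 (red) at half
packet = dmx_packet([255, 128])
```

In a real rig, a loop would rebuild and resend this packet whenever a fader or cue changes the channel levels.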

At the time, my perception of the project was just that it was a way to finish my degree. The ironic result of this was that the manual, which was clearly written as a document to help lighting techs do things, was never actually given to anyone who would do something with it.

That’s changing today.

I went through the project, corrected some things, removed some overly specific bits, and saved it as a PDF.

It’s completely free – Click here to download.