AI?

There’s a lot of stuff we can automate, and it’s really cool even if it’s not really AI.


 

If you haven’t heard, there’s a new Midas Heritage-D console being released. (Video here.) It looks, to use academic parlance, “pretty rad, Dude.” In particular, it has tons of advertised I/O capability, which is exciting to me as a co-producer on a Pink Floyd tribute that is continually growing its mix-output count. Not that we can afford the $70,000 price tag for two tour-packs, but maybe there will be a spillover effect to new, affordable offerings down the line.

Anyway.

A feature being touted on the new desk is an AI assist for things like compression. Provide the system a set of directives on what you want, and your console works to create a compression solution on the channel which satisfies those aims. (It’s probably something like “fast attack relative to the program material, slow release relative to the same, use a low ratio, and try for an average gain reduction of 3dB.”)
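
To make the guess concrete, a directive set like that could be mapped to starting parameters with plain arithmetic. Here's a minimal sketch under that assumption (all names hypothetical, not anything Midas has published): "relative to the program material" is interpreted as scaling attack and release from an estimated interval between transients.

```python
# Hypothetical sketch: turn a directive set into starting compressor
# parameters. "Fast/slow relative to the program material" is interpreted
# here as scaling attack and release from an estimated inter-transient
# interval. The scaling factors are illustrative, not anyone's real recipe.

def directives_to_parameters(transient_interval_ms, ratio=2.0, target_gr_db=3.0):
    """Map a directive set to initial compressor settings.

    transient_interval_ms: estimated time between transients in the
        program material (e.g., from envelope analysis)
    ratio: the "low ratio" directive
    target_gr_db: the "aim for ~3 dB average gain reduction" directive
    """
    return {
        "attack_ms": 0.05 * transient_interval_ms,  # "fast" relative to program
        "release_ms": 0.5 * transient_interval_ms,  # "slow" relative to program
        "ratio": ratio,
        "target_gr_db": target_gr_db,
    }
```

For program material with transients roughly 200 ms apart, that sketch would suggest a 10 ms attack and a 100 ms release as a starting point, which the automation could then refine.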

Now, I don’t want to pooh-pooh something without a caveat. I think it’s possible that some sort of AI technology is involved…but my strong suspicion is that the “AI” label is mostly marketing. Do you remember when high-def TV was a new thing, and everything else started having “HD” added to its model number? My guess is that “AI” here actually refers to an algorithm-driven automation system.

I mean, we’ve had program-dependent attack and release for a long time. They existed almost 20 years ago, when I was in school for audio, and were handled with both digital logic and analog circuits. This isn’t some sort of miraculous thing. Further, my slowly-clarifying definition of AI is that it’s a network of process nodes that can communicate with each other, and independently form a solution to a problem based on training data. You don’t need that for a well-defined process, like setting a compressor such that the peak gain reduction across some number “n” of gain reduction events averages out to, say, 6 dB. That’s just simple arithmetic and access to the threshold knob, when you look at it.

But that doesn’t make an algorithmic assist any less cool! The coolness is in the automation, utility, and the ability to save the EFFECT of a processing solution, rather than the CAUSE.

I mean, let’s say I have a preferred spectral content for vocals. Which I do. Definitely. And so do you, I’m sure. I can’t graph it, but I know it when I hear it.

Somebody starts singing into a mic, and what do we do? We grab the channel EQ and go to work, trying to make the tonality of the vocal match our preference. Upon success, we may save the curve as a preset. Later, we recall that curve on similar setups, hoping for a sound that’s essentially the same. This works because the whole signal chain greatly resembles our original solution: a person sings at some reasonable distance from a mic we’re familiar with, and that mic is connected to either a FOH PA we know, or one we’ve tuned to sound rather like what we know.

Even though the applied EQ curve on the channel is a cause (a change to a transfer function) rather than an effect (the actual spectral content of the channel), we can get close to a desired result when the other parameters are well controlled. Our results skew, though, when the other causes (microphone choice, singer distance, etc.) no longer resemble what we worked with when we saved the channel EQ curve.

But what if we didn’t save the curve? What if we saved the effect of the EQ settings, which is a vocal channel that has a certain magnitude response when averaged over some specified time period? What if we could press a button on the console that allowed the internal software to compare the live, averaged magnitude curve of the channel with what we stored, and then auto-adjust the channel EQ to seek that target? What if we could do that with any EQ, including whatever is available on our outputs?
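
A bare-bones sketch of that "seek the target" idea, with everything hypothetical (band layout, step size, and how the live curve gets measured are all assumptions): compare the stored curve against the live averaged curve, band by band, and move each EQ band a small clamped amount toward the target.

```python
# Sketch of "save the effect, not the cause": compare a live averaged
# magnitude response against a stored target curve and nudge per-band EQ
# gains toward the target. The band layout and step size are hypothetical;
# a real console would measure the live curve with its own analysis engine.

def seek_target_curve(eq_gains_db, live_curve_db, target_curve_db, step_db=0.25):
    """Return updated per-band EQ gains, moved toward the target curve.

    eq_gains_db: current EQ gain per band (dB)
    live_curve_db: measured average magnitude per band (dB)
    target_curve_db: stored desired magnitude per band (dB)
    """
    new_gains = []
    for gain, live, target in zip(eq_gains_db, live_curve_db, target_curve_db):
        error = target - live
        # Move each band a small, clamped amount per pass so the EQ
        # converges gradually instead of jumping.
        move = max(-step_db, min(step_db, error))
        new_gains.append(gain + move)
    return new_gains
```

Run on every analysis update, a loop like this keeps pulling the channel toward the stored response even as the singer, mic, or room drifts away from the original conditions.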

Do you see how powerful that could be?

There are algorithms available right now that can do this kind of work; it’s just that they’re not natively available on any console that I know of.

And hey, maybe Midas has done some AI development on pre-training a system to use parametric and graphic EQs in a reasonable way to accomplish such a task. AI doesn’t have to be running live to be impressive.

So, anyway, whether there’s live AI running inside a new Heritage-D (or any other console) or no real AI at all, the ability to ask the console to help you match an outcome, rather than just a set of parameters, is a very nifty thing indeed.