Tag Archives: Science

Graphic Content

Transfer functions of various reasonable and unreasonable graphic EQ settings.

Please Remember:

The opinions expressed are mine only. These opinions do not necessarily reflect anybody else’s opinions. I do not own, operate, manage, or represent any band, venue, or company that I talk about, unless explicitly noted.

Want to use this image for something else? Great! Click it for the link to a high-res or resolution-independent version.

An aphorism that I firmly believe goes like this: “If you can hear it, you can measure it.” Of course, there’s another twist to that – the one that reminds you that it’s possible to measure things you can’t hear.

The graphic equalizer, though still recognizable, is losing a bit of its commonality as an outboard device. With digital consoles invading en masse, making landings up and down the treasure-laden coasts of live audio, racks and racks of separate EQ devices are being virtualized inside computer-driven mix platforms. At the same time, hardware graphics are still a real thing that exists…and I would wager that most of us haven’t seen a transfer function of common uses (and abuses) of these units, which happen whether you’ve got a physical object or a digital representation of one.

So – let me dig up a spare Behringer Ultragraph Pro, and let’s graph a graphic. (An important note: Any measurement that you do is a measurement of EXACTLY that setup. Some parts of this exercise will be generally applicable, but please be aware that what we’re measuring is a specific Behringer EQ and not all graphic EQs in the world.)

The first thing to look at is the “flat” state. When you set the processing to “out,” is it really out?

In this case, very much so. The trace is laser flat, with +/- 0.2 dB of change across the entire audible spectrum. It’s indistinguishable from a “straight wire” measurement of my audio interface.

Now, we’ll allow audio to flow through the unit’s filtering, but with the high and low-pass filters swept to their maximums, and all the graph filters set to 0 dB.

The low and high-pass filters are still definitely having an effect in the audible range, though a minimal one. Half a decibel down at 45 Hz isn’t nothing, but it’s also pretty hard to hear.

What happens when the filters are swept to 75 Hz and 10 kHz?

The 3 dB points are about where the labeling on the knobs tells you they should be (with a little bit of overshoot), and the filters roll off pretty gently (about 6 dB per octave).
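That gentle, roughly 6 dB-per-octave behavior is what you would expect from first-order filters. As a sketch (an idealized textbook model, not a measurement of the Behringer's actual circuit), here's how the magnitude response of first-order high-pass and low-pass filters can be computed:

```python
import math

def first_order_hpf_db(f, fc):
    """Magnitude (dB) of an ideal first-order high-pass filter."""
    r = f / fc
    return 20 * math.log10(r / math.sqrt(1 + r**2))

def first_order_lpf_db(f, fc):
    """Magnitude (dB) of an ideal first-order low-pass filter."""
    r = f / fc
    return 20 * math.log10(1 / math.sqrt(1 + r**2))

# At the corner frequency, a first-order filter is 3 dB down:
print(round(first_order_hpf_db(75, 75), 1))          # -3.0
print(round(first_order_lpf_db(10_000, 10_000), 1))  # -3.0

# One octave below the 75 Hz corner, the HPF is about 7 dB down,
# approaching the 6 dB-per-octave slope further out:
print(round(first_order_hpf_db(37.5, 75), 1))        # -7.0
```

Note that the slope is asymptotic: close to the corner the filter is gentler than 6 dB per octave, which is part of why these sweepable filters sound so polite.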

Let’s sweep the filters out again, and make a small cut at 500 Hz.

Interestingly, the filter doesn’t seem to be located exactly where the faceplate says it should be – it’s about 40% of a third-octave space away from the indicated frequency center, if the trace is accurate in itself.

What if we drop the 500 Hz filter all the way down, and superimpose the new trace on the old one?

The filter might look a bit wider than what you expected, with easily measurable effects happening at a full octave below the selected frequency. Even so, that’s pretty selective compared to lots of wide-ranging, “ultra musical” EQ implementations you might run into.
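To see why a nominally third-octave cut still reaches a full octave below its center, you can model a generic analog peaking/dip filter. This is the standard textbook transfer function, with an assumed Q of about 4.3 (roughly third-octave) and a −15 dB cut — illustrative numbers, not derived from this particular unit:

```python
import math

def peaking_eq_db(f, f0, gain_db, q):
    """Magnitude (dB) at frequency f of an analog peaking/dip EQ filter.

    Standard parametric-EQ transfer function:
      H(s) = (s^2 + s*(A/Q)*w0 + w0^2) / (s^2 + s*w0/(A*Q) + w0^2)
    with A = 10^(gain_db/40), evaluated at s = j*2*pi*f.
    """
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0
    w = 2 * math.pi * f
    re = w0**2 - w**2  # real part is identical in numerator and denominator
    num = re**2 + (w * a * w0 / q) ** 2
    den = re**2 + (w * w0 / (a * q)) ** 2
    return 10 * math.log10(num / den)

# A -15 dB cut at 500 Hz with a roughly third-octave Q:
print(round(peaking_eq_db(500, 500, -15, 4.3), 1))  # -15.0 at the center
print(round(peaking_eq_db(250, 500, -15, 4.3), 2))  # ≈ -0.53 an octave below
```

About half a decibel at an octave away is small, but it's exactly the kind of "easily measurable" skirt the trace shows — and it stacks up fast when neighboring filters are also pulled down.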

What happens when we yank down two filters that are right next to each other?

There’s an interesting ripple between the cuts, amounting to a little bit less than 1 dB.

How about one of the classic graphic EQ abuses? Here’s a smiley-face curve:

Want to destroy all semblance of headroom in an audio system? It’s easy! Just kill the level of the frequency range that’s easiest to hear and most efficient to reproduce, then complain that the system has no power. No problem! :Rolls Eyes:

Here’s another EQ abuse, alternately called “Death To 100” or “I Was Too Cheap To Buy A Crossover:”

It could be worse, true, but…really? It’s not a true substitute for having the correct tool in the first place.


The Pro-Audio Guide For People Who Know Nothing About Pro-Audio, Part 3

Onward to the microphone preamp…or trim.


“Signals at mic-level may require large, positive gain changes to correctly drive downstream electronics, and so a jack that can be connected to a microphone preamp is needed in that case.”

Read the whole thing, free, at Schwilly Family Musicians.


But WHY Can’t You Fix Acoustics With EQ?

EQ is meant for fixing a different set of problems.


A distinct inevitability is that someone will be in a “tough room.” They will look at the vast array of equalization functionalities offered by modern, digital, sound-reinforcement tools, and they will say, “Why can’t I fix the space?”

Here’s the deal. A difficult room – that is, one with environmental attributes that make our job harder – is a time problem in the acoustical domain. EQ, on the other hand, is a tool for changing frequency magnitude in the electronic domain.

When it comes right down to it, bad acoustics is just shorthand for “A sound arrived at a listener multiple times, and it was unpleasant.” A noise was made, and part of its energy traveled directly to somebody’s ears. Some other part of its energy splattered off a wall, ceiling, or floor…or a combination of all of those, at least once, and then also arrived at somebody’s ears. Maybe a lot of the high-frequency information was absorbed, causing the combined result to be a muddy garble. Of course, all the transients getting smeared around didn’t help much, either. It gets even more fun when a sound is created, and bounces around, and some of it goes into a monitor system, and gets made again, and bounces around, and some of it goes to the FOH PA, and gets made AGAIN, and bounces around, and all of that gets back into the microphone, and so that sound gets generated again, except at a different level and frequency response, and…

How’s an EQ going to fix that? I mean, really fix it?

You might be able to dig out a hole in the system’s response that compensates for annoying frequency buildup. If the room causes a big, wide bump at 250 Hz, dialing that out of the PA in correct proportion will certainly help a bit. It’s a very reasonable thing to do, and we engage in such an exercise on a regular basis.

But all the EQ did was change the magnitude response of the PA. Sure, equalization uses time to precipitate frequency-dependent gain changes, but it doesn’t do a thing in relation to environmental time issues. The noise from the PA is still bouncing madly off of a myriad of surfaces. It’s still arriving at the listener multiple times. The transients are still smeared. The information in the electronic domain got turned into acoustical information (by necessity), and at that point, the EQ stopped having any direct influence at all.

You can’t use EQ to fix the problem. You can alleviate frequency-related effects of the acoustical nightmare you have on your hands, but an actual solution involves changing the behavior of the room. Your EQ is not inserted across the environment, nor can it be, so recognize what it can and can’t do.


What’s Next?

I don’t know, but we’re probably not going to blow the lid off of audio in general.


I get extremely suspicious when somebody claims to have solved a fundamental problem in audio. Mostly, this is because all the basic gremlins have been thoroughly killed and dried. It’s also because sonic physics is a system of laws that tolerate zero BS. (When somebody claims that they have a breakthrough technology that sounds great by way of spraying sound like a leaky hose, I know they are full of something brown and stinky.)

Modern audio is what I would definitely call a mature technology. In mature technologies, the bedrock science of the technology’s behavior is very well understood. The apparent breakthroughs, then, come when another technology allows a top-shelf behavior to be made available to the masses, or when it creates an opportunity to make a theoretical idea a reality.

A great example is the two-way, fullrange loudspeaker. Such boxes are better than they have ever been. Anyone who remembers wrestling Peavey SP2 TI boxes is almost tearfully grateful to have small, light, loud enclosures available for a rock-bottom price. Obviously, there have been advances. We’ve figured out how to make loudspeaker drivers more efficient and more reliable. Commercially viable neodymium magnets give us the same field strength for less mass. The constant-directivity horn (and its refined derivatives) has delivered improved pattern control.

These are important developments!

Yet, the unit, as an overall object, would be entirely recognizable to someone magically transported to us from three decades in the past. The rules are the same. You’ve got a cone driver in a box, and a compression driver mated to a horn. The cone driver has certain characteristics which the main box has to be built around. It’s not as though we’ve evolved to exotic, crystalline sound-emitters that work by magic.

The palpable improvements aren’t really to do with audio, in a direct sense. They have to do with miniaturization, computerization, and commoditization. An active loudspeaker in the 21st century is likely to sound better than a 1980s or 1990s unit, not because it’s a completely different technology, but because the manufacturer can design, test, tune, and package the product as a bundle of known (and very carefully controlled) quantities. When a manufacturer ships a passive loudspeaker, there’s a lot that they just can’t know – and can’t even do. Stuff everything into the enclosure, and the situation changes dramatically. You know exactly what the amplifier and the driver are going to do to each other. You know just how much excursion that LF driver will endure, and you can limit the amplifier at exactly the right point to get maximum performance without damage. You can use steeper crossover slopes to (maybe) cross that HF driver a little lower, improving linearity in the intelligibility zone. You can precisely line up the drivers in time. You can EQ the whole business within an inch of its life.

Again, that’s not because the basic idea got better. It’s because we can put high-speed computation and high-powered amplification in a small space, for (relatively) cheap. Everything I’ve described above has been possible to do for a long time. It’s just that it wasn’t feasible to package it for the masses. You either had to do it externally and expensively, shipping a large, complicated product package to an educated end user…or just let the customer do whatever, and hope for the best.

I can’t say that I have an angle on what the next big jump will be for audio. I’m even skeptical on whether there will be another major leap. I’m excited for more features to become more affordable, though, so I’ll keep looking for those gear catalogs to arrive in the mail.


The Pro-Audio Guide For People Who Know Nothing About Pro-Audio, Part 1

A series I’m starting on Schwilly Family Musicians.


From the article:

“The fundamental key to all audio production is that we MUST have sound information in the form of electricity. Certain instruments, like synthesizers and sample players, don’t produce any actual sound at all; they go straight to producing electricity.

For actual sound, though, we have to perform a conversion, or “transduction.” Transduction, especially input transduction, is THE most important part of audio production. If the conversion from sound to electricity is poor, nothing happening down the line will be able to fully compensate.”


Read the whole thing here, for free!


EQ Propagation

The question of where to EQ is, of course, tied inextricably to what to EQ.


On occasion, I get the opportunity to guest-lecture to live-sound students. When things go the way I want them to, the students get a chance to experience the dialing up of monitor world (or part of it). One of the inevitable and important questions that arises is, “Why did you reach for the channel EQ when you were solving that one problem, but then use the EQ across the bus for this other problem?”

I’ve been able to give good answers to those questions, but I’ve also wanted to offer better explanations. I think I’ve finally hit upon an elegant way to describe my decision making process in regards to which EQ I use to solve different problems. It turns out that everything comes down to the primary “propagation direction” that I want for a given EQ change:

Effectively speaking, equalization on an input propagates downstream to all outputs. Equalization on an output effectively propagates upstream to all inputs.


What I’ve just said is, admittedly, rather abstract. That being so, let’s take a look at it concretely.

Let’s say we’re in the process of dialing up monitor world. It’s one of those all-too-rare occasions where we get the chance to measure the output of our wedges and apply an appropriate tuning. That equalization is applied across the appropriate bus. What we’re trying to do is equalize the box itself, so we can get acoustical output that follows a “reference curve.” (I advocate for a flat reference curve, myself.)

It might seem counter-intuitive, but if we’re going to tune the wedge electronically, what we actually have to do is transform all of the INPUTS to the box. Changing the loudspeaker itself to get our preferred reference curve would be ideal, but also very difficult. So, we use an EQ across a system output to change all the signals traveling to the wedge, counteracting the filtering that the drivers and enclosure impose on whatever makes it to them. If the monitor is making everything too crisp (for example), the “output” EQ lets us effectively dial high-frequency information out of every input traveling to the wedge.

Now, we put the signal from a microphone into one of our wedges. It starts off sounding generally good, although the channel in question is a vocal and we can tell there’s too much energy in the deep, low-frequency area. To fix the problem, we apply equalization to the microphone’s channel – the input. We want the exact change we’ve made to apply to every monitor that the channel might be sent to, and EQ across an input effectively transforms all the outputs that signal might arrive at.

There’s certainly nothing to stop us from going to each output EQ and pulling down the LF, but:

1) If we have a lot of mixes to work with, that’s pretty tedious, even with copy and paste, and…

2) We’ve now pushed away from our desired reference curve for the wedges, potentially robbing desired low-end information from inputs that would benefit from it. A ton of bottom isn’t necessary for vocals on deck, but what if somebody wants bass guitar? Or kick?

It makes much more sense to make the change at the channel if we can.

This also applies to the mud and midrange feedback weirdness that tends to pile up as one channel gets routed to multiple monitors. The problems aren’t necessarily the result of individual wedges being tuned badly. Rather, they are the result of multiple tunings interacting in a way that’s “wrong” for one particular mic at one particular location. What we need, then, is to EQ our input. The change then propagates to all the outputs, creating an overall solution with relative ease (and, again, we haven’t carved up each individual monitor’s curve into something that sounds weird in the process).

The same idea applies to FOH. If the whole mix seems “out of whack,” then a change to the main EQ effectively tweaks all the inputs to fix the offending frequency range.

So, when it’s time to grab an EQ, think about which way you want your changes to flow. Changes to inputs flow to all the connected outputs. Changes to outputs flow to all connected inputs.


Livestreaming Is The New Taping – Here Are Some Helpful Hints For The Audio

An article for Schwilly Family Musicians.


“The thing with taping or livestreaming is that the physics and logistics have not really changed. Sure, the delivery endpoints are different, especially with livestreaming being a whole bunch of intangible data being fired over the Internet, but how you get usable material is still the same. As such, here are some hints from the production-staff side for maximum effectiveness, at least as far as the sound is concerned…”


The rest is here. You can read it for free!


Hitting The Far Seats

A few solutions to the “even coverage” problem, as it relates to distance.


This article, like the one before it, isn’t really “small venue” in nature. However, I think it’s good to spend time on audio concepts which small-venue folk might still run across. I’m certainly not “big-time,” but I still do the occasional show that involves more people and space. I (like you) really don’t need to get engaged with a detailed discussion regarding an enormous system that I probably won’t ever get my hands on, but the fundamentals of covering the people sitting in the back are still valuable tools.

This article is also very much a follow-up to the piece linked above. Through that lens, you can view it as a discussion of the viable options for solving the difficulties I ran into.

So…

The way that you get “throw” to the farthest audience members depends on the overall PA deployment strategy you’re using. Deployment strategies, of course, depend on the gear in question being appropriate for that strategy; you can’t choose to deploy a bunch of point-source boxes as a line-array and have it work out very well. (Some have tried. Some have thought it was okay. I don’t feel comfortable recommending it.)

Option 1: Single Arrival, “Point Source” Flavor

You can build a tall stack or hang an array with built-in, non-changeable angles, but both cases use the same idea: Any given audience member should really only hear one box (per side) at a time. Getting the kind of directivity necessary for that to be strictly true is quite a challenge at lower frequencies, so the ideal tends to not be reached. Nevertheless, this method remains viable.

I’ve termed this deployment flavor as “single arrival” because all sound essentially originates at the same distance from any given audience member. In other words, all the PA loudspeakers for each “side” are clustered as closely as is practical. The boxes meant to be heard up close are run at a significantly lower level than the boxes meant to cover the far-field. A person standing 50 feet from the stage might be hearing a loudspeaker making 120 dB SPL at 3 feet, whereas the patrons sitting 150 feet away would be hearing a different box – possibly stacked atop the first speaker – making 130 dB SPL at 3 feet. As such, the close-range listener is getting about 96 dB SPL, and the far-field audience member also hears a show at roughly 96 dB SPL.
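Those numbers follow from free-field inverse-square falloff — 6 dB of loss per doubling of distance — which is easy to check:

```python
import math

def spl_at_distance(spl_ref_db, ref_dist, dist):
    """Free-field (inverse-square) SPL falloff: level drops by
    20*log10 of the distance ratio, i.e. 6 dB per doubling."""
    return spl_ref_db - 20 * math.log10(dist / ref_dist)

# Near box: 120 dB SPL at 3 ft, heard from 50 ft away
print(round(spl_at_distance(120, 3, 50)))   # 96
# Far box: 130 dB SPL at 3 ft, heard from 150 ft away
print(round(spl_at_distance(130, 3, 150)))  # 96
```

Real rooms (and real outdoor spaces with ground reflections) won't behave this cleanly, but the point stands: the 10 dB hotter "long-throw" box delivers roughly the same level to listeners three times as far away.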

This solution is relatively simple in some respects, though it requires the capability of “zone” tuning, as well as loudspeakers capable of high-output and high directivity. (You don’t want the up-close audience to get cooked by the loudspeaker that’s making a ton of noise for the long-distance people.)

Option 2: Single Arrival, Line-Array Flavor

As in the point source flavor, you have one array deployed “per side,” with each individual box as close to the other boxes as is achievable. The difference is that an honest-to-goodness line-array is meant to work by the audible combination of multiple loudspeakers. At very close distances, it may be possible to only truly hear a small part of the line, and this does help in keeping the nearby listeners from having their faces ripped off. However, the overall idea is to create a radiation pattern that resembles a section of a cylinder. (Perfect achievement of such a pattern isn’t really feasible.) This is in contrast to point-source systems, where the pattern tends towards a section of a sphere.

As is the case in many areas of life, everything comes down to surface area. A sphere’s surface area is 4*pi*radius^2, whereas the lateral surface area of a cylinder is 2*pi*radius*height. The perceived intensity of sound is the audible radiation spread across the surface area of the radiation geometry. More surface area means less intensity.

To keep the calculations manageable, I’ll have to simplify from sections of shapes to entire shapes. Even so, some comparisons can be made: At a distance of 150 feet, the sound power radiating in a spherical pattern is spread over a surface area of 282,743 square feet. For a 10-foot high cylinder, the surface area is 9424 square feet.

For the sphere, 4 watts of sound power (NOT electrical power!) means that a listener at the 150 foot radius gets a show that’s about 71 dB. For the cylinder, the listener at 150 feet should be getting about 86 dB. At the close-range distance of 50 feet, the cylindrical radiation pattern results in a sound level of 91 dB, whereas a spherical pattern gets 81 dB.
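Those figures can be reproduced directly from the surface-area math. Note that this deliberately mirrors the simplification used above — areas stay in square feet against a 1e-12 W reference — so the absolute levels are illustrative comparisons rather than strict SI sound-intensity levels:

```python
import math

def sphere_db(power_w, radius_ft):
    """Level for acoustic power spread over a full sphere.

    Areas are kept in square feet against a 1e-12 W reference,
    matching the simplified comparison in the text; the absolute
    numbers are illustrative, not strict SI intensity levels."""
    area = 4 * math.pi * radius_ft**2
    return 10 * math.log10((power_w / area) / 1e-12)

def cylinder_db(power_w, radius_ft, height_ft):
    """Level for acoustic power spread over a cylinder's lateral surface."""
    area = 2 * math.pi * radius_ft * height_ft
    return 10 * math.log10((power_w / area) / 1e-12)

print(round(sphere_db(4, 150), 1))        # ≈ 71.5
print(round(cylinder_db(4, 150, 10), 1))  # ≈ 86.3
print(round(sphere_db(4, 50), 1))         # ≈ 81.0
print(round(cylinder_db(4, 50, 10), 1))   # ≈ 91.0
```

The instructive part is the difference, not the absolute values: the cylindrical pattern is about 15 dB ahead at 150 feet, but only 10 dB ahead at 50 feet, because its level falls at 3 dB per doubling of distance instead of 6.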

Putting aside for the moment that I’m assuming ideal and mathematically easy conditions, the line-array has a clear advantage in terms of consistency (level difference in the near and far fields) without a lot of work at tuning individual boxes. At the same time, it might not be quite as easily customizable as some point-source configurations, and a real line-source capable of rock-n-roll volume involves a good number of relatively expensive elements. Plus, a real line has to be flown, and with generous trim height as well.

Option 3: Multiple Arrival, Any Flavor

This is otherwise known as “delays.” At some convenient point away from the main PA system, a supplementary PA is set. The signal to that supplementary PA is made to be late, such that the far system aligns pleasingly with the sound from the main system. The hope is that most people will overwhelmingly hear one system over the other.
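The delay time itself is just the extra acoustical travel distance divided by the speed of sound. A minimal sketch — the 75-foot spacing is a hypothetical example, and the speed-of-sound constant shifts with temperature:

```python
SPEED_OF_SOUND_FT_PER_S = 1130.0  # approximate, at room temperature

def delay_ms(spacing_ft):
    """Delay (in milliseconds) needed so a remote PA's output lines up
    with sound arriving acoustically from the mains, given the spacing
    between the two systems."""
    return spacing_ft / SPEED_OF_SOUND_FT_PER_S * 1000

# Hypothetical delay position 75 ft downstream of the mains:
print(round(delay_ms(75), 1))  # ≈ 66.4 ms
```

In practice you'd verify this with a measurement rather than trusting the tape measure, but the arithmetic gets you into the neighborhood.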

The point with this solution is to run everything more quietly and more evenly by making sure that no audience member is truly in the deep distance. If each PA only has to cover a distance of 75 feet, then an SPL of 90 dB at that distance requires 118 dB at 3 feet.
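That 118 dB figure is the inverse-square relationship run in reverse — start from the target level and add 20·log10 of the distance ratio:

```python
import math

def required_near_spl(target_db, target_dist_ft, ref_dist_ft=3):
    """SPL needed at the reference distance to hit a target level
    farther out, assuming free-field inverse-square falloff."""
    return target_db + 20 * math.log10(target_dist_ft / ref_dist_ft)

# Each PA only covers 75 ft:
print(round(required_near_spl(90, 75)))   # 118
# For contrast, covering the full 150 ft from a single source:
print(round(required_near_spl(90, 150)))  # 124
```

Shaving that last 6 dB off the requirement is the whole argument for delays: every box in the system gets to loaf a little.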

The upside to this approach is that the systems don’t have to individually be as powerful, nor do they strictly need to have high directivity (although it’s quite helpful in keeping the two PA systems separate for the listeners behind the delays). The downside is that it requires more space and more rigging – whether actual rigging or just loudspeakers raised on poles, stacks, or platforms. Additionally, you have to deal with more signal and/or power runs, possibly in difficult or high-traffic areas. It also requires careful tuning of the delay time to work properly, and even then, being behind or to the side of the delays causes the solution to be invalid. In such a condition, where both systems are quite audible, the coherence of the reproduced audio suffers tremendously.


If I end up trying the Gallivan show again, I think I’ll go with delays. I don’t have the logistical resources to handle big, high-output point-source boxes or a real array. I can, on the other hand, find a way to get boxes up on sticks with delay applied. I can’t say that I’m happy about the potential coherence issues, but everything in audio is a compromise in some way.


What Went Wrong At The Big Gig

Sometimes a show will really kick your butt.


Do this type of work long enough, and there will come a certain day. On that day, you will think, “If just about half of this audience goes home being totally pissed at me, I’ll call that a win.”

For me, that day came last weekend.

I was handling a show out at the Gallivan Center, a large, outdoor event space in the heart of Salt Lake. The day started well (I didn’t have to fight for parking, and I had both a volunteer crew and my ultra-smart assistant to help me out), and actually ended on a pretty okay note (dancing and cheering), but I would like to have skipped over the middle part.

It all basically boils down to disappointing a large portion of an audience.

I’ve come to terms with the reality that I’m always going to disappoint someone. There will always be “THAT guy” in the crowd who wants the show to have one kind of sound, a sound that you’ve never prioritized (or a sound that you simply don’t want). That person is just going to have to deal – and interestingly, they are often NOT the person writing the checks, so there’s a certain safety in being unruffled by their kerfuffle. However, when a good number of people are in agreement that things just aren’t right, well, that can turn a gig into “40 miles of bad road.”

Disappointment is a case of mismatched expectations. The thing with a show is that a mismatch can happen very early…and then proceed to snowball.

For instance, someone might say to me: “You didn’t seriously expect to do The Gallivan with your mini-concert rig, did you?”

No, I did not expect that, and therein lies a major contributing factor. “Doing The Gallivan” means covering a spread-out crowd of 1500+ people with rock-n-roll volume. I am under no illusions as to my capability in that space (which is no capability at all). What I thought I was going to do was to hit a couple hundred merry-makers with acoustic folk, Bluegrass, and “Newgrass” tunes. I thought they’d be packed pretty closely together near the stage, with maybe the far end of the crowd being up on the second tier of lawn.

I suppose you can guess that’s not what happened.

For most of the night, the area in front of the stage was barely populated at all. I remembered that particular piece of the venue as being turf (back in the day), but now it’s a dancefloor. That meant that the patrons who wanted to sit – and that was the vast majority – basically started where I was at FOH. Effectively, this created a condition like what you would see at a larger festival, where the barricade might be 40 – 50 feet from the stage.

Now add to this that we had a pretty ample crowd, and that they ended about 150 feet away from the deck.

Also add in that a lot of what we were doing was “traditional,” or in other words, acoustic instruments that were mic’d. Folk and Bluegrass really are not that loud in the final analysis, which means that making them unnaturally loud in order to get “throw” from a single source is a difficult proposition.

Fifty feet out, there were points where I was lucky to make about 85 dB SPL, C-weighted. After that, gain-before-feedback started to become a real conundrum. Now, imagine that you’re three times that distance away, where the lawn ends. All you got there was about 75 dB C, which isn’t much to compete against traffic noise and conversations.

Things got louder later. The closing acts were acoustic-electric “Newgrass,” which meant I could make as much noise as the rig would give me. That would have gotten us music lovers to about 94 – 97 dB C at FOH (by my guess). The folks in the back, then, were just starting to hear home-stereo level noise.

In any case, I was complained at quite a bit (by my standards). I think I spent at least 50% of the show wanting to crawl into a hole and hide. That we had some feedback issues didn’t help…when you’re riding the ragged edge trying to make more volume, you sometimes fall off the surfboard. We also had some connectivity problems with the middle act that put us behind, and further aggravated my sense of not delivering a standout performance.

Like I said, there was some good news by the time we shut the power off. Even before then, too. The people who were getting the volume they wanted appeared to be enjoying themselves. Most of the bands seemed happy with how the sound worked out on the stage itself, and the audience as a whole was joyous enough at the end that I no longer felt the oppressive weight of imagining the crowd as a disgruntled gestalt entity. Still, I wasn’t going to win any awards for how everything turned out. I was smarting pretty badly during the strike and van pack.

But, you know, some of the most effective learning in life happens when you fall over and tear up your knees. I can certainly tell you what I think could be done to make the next go-around a bit more comfortable.

That will have to wait for the next installment, though.


The Great, Quantitative, Live-Mic Shootout

A tool to help figure out what (inexpensive) mic to buy.


See that link up there in the header?

It takes you to The Great, Quantitative, Live-Mic Shootout, just like this link does. (Courtesy of the Department of Redundancy Department.)

And that’s a big deal, because I’ve been thinking and dreaming about doing that very research project for the past four years. Yup! The Small Venue Survivalist is four years old now. Thanks to my Patreon supporters, past and present, for helping to make this idea a reality.

I invite you to go over and take a look.