Hey guys and gals. For this blog post, I thought it would be interesting to have a look at latency and audio interfaces. As you can see in the picture above, I pulled out my Focusrite Forte for this alongside the RME ADI-2 Pro FS.
Latency is simply the delay between when an instruction is issued and when the corresponding event actually happens. Computers, device drivers, and the hardware are not instructed in "real time" down to a bit-by-bit timeframe; rather, they act on "chunks" of data, generally issued with some buffering to keep the pipeline flowing. This buffering mechanism is a major, though not the only, factor in latency.
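As a rough illustration (my own back-of-the-envelope sketch, not tied to any particular driver), the buffer-related portion of latency is just the buffer size divided by the sample rate. Real-world figures will be higher once driver overhead, USB transfer, and converter delays are added on top:

```python
# Rough estimate of one-way buffering latency for an audio interface.
# Illustrative only; actual latency adds driver, transport (USB/Thunderbolt),
# and ADC/DAC converter delays on top of this figure.

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time to fill/drain one buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

for buffer in (32, 64, 128, 256, 512, 1024):
    print(f"{buffer:5d} samples @ 48 kHz -> {buffer_latency_ms(buffer, 48_000):6.2f} ms")
```

Run that and you'll see a 64-sample buffer at 48 kHz accounts for only about 1.3 ms, while a 1024-sample buffer is already over 21 ms per buffer of delay.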
There is a balance to be struck, though. The larger the buffer, the greater the potential latency, but the less likely we are to run into dropouts. The smaller the buffer, the lower the potential latency, but the greater the demand on the CPU to keep the buffer from running empty, and the greater the likelihood of audio errors if deadlines are missed (the little sketch below lets you hear this for yourself). This idea of "strain" on the CPU is particularly relevant in digital audio workstations (DAWs) when all kinds of DSP such as VST plugins are used in audio production.
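If you want to experiment with this trade-off yourself, here's a minimal sketch using the python-sounddevice library (my choice for illustration; any callback-based audio API would show the same thing). Shrink BLOCKSIZE and the reported latency drops, but if your machine can't service the callback in time, the stream reports underflows, which you hear as dropouts:

```python
# Minimal sketch of the buffer-size trade-off using python-sounddevice.
# A smaller blocksize lowers latency, but the callback must deliver samples
# on time or the output buffer runs dry (an audible dropout/underflow).
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 48_000
BLOCKSIZE = 64          # try 64 vs. 1024 and compare the reported latency
phase = 0               # running sample counter to keep the sine continuous

def callback(outdata, frames, time, status):
    global phase
    if status.output_underflow:
        print("Underflow! Buffer ran dry -- audible dropout.")
    t = (np.arange(frames) + phase) / SAMPLE_RATE
    outdata[:] = 0.2 * np.sin(2 * np.pi * 440.0 * t).reshape(-1, 1)
    phase += frames

with sd.OutputStream(samplerate=SAMPLE_RATE, blocksize=BLOCKSIZE,
                     channels=1, callback=callback) as stream:
    print(f"Reported output latency: {stream.latency * 1000:.1f} ms")
    sd.sleep(2000)      # play a 440 Hz tone for two seconds
```

This is exactly the mechanism a DAW manages for you: the buffer-size setting in your ASIO/Core Audio control panel is this BLOCKSIZE knob, and heavy VST processing inside the callback is what makes the small-buffer deadlines hard to meet.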
For us audiophiles, latency generally doesn't matter much because we're typically just interested in the sound quality once we "press play". So long as the audio starts reasonably quickly and isn't disrupted during playback, there's nothing to complain about. Latency by itself has no effect on sound quality (despite the claims of some, which we'll address later).