The Science of Sample Rates (When Higher Is Better — And When It Isn’t)

We are proud to begin re-posting some of the greatest hits from Trust Me, I’m a Scientist right here on SonicScoop for posterity. This story was originally published in TMIAS on February 4, 2013.

Image courtesy of Flickr user mikecogh

One of the most hotly—and perhaps unnecessarily—debated topics in the world of audio is the one that surrounds digital sample rates.

It seems an unlikely topic for polarization, but for more than 10 years, the same tired arguments have been batted about by each side with almost unrelenting intensity.

At the fringes, advocates of either side have often dug deeper trenches of faith for themselves. But as much as that’s the case, there’s also a growing consensus among designers and users who have a firm understanding of digital audio.

Namely, that there are perfectly good reasons for sticking with the current professional and consumer standards of 44.1 and 48 kHz for recording and playback – and some valid arguments for moving up to slightly higher sample rates, such as 60, 88.2 or even as high as 96 kHz. What seems to have less informed support is the push to ultra-high sample rates like 192kHz.

We’ll explore the arguments on both sides of the major questions around sample rates and try to find out where each faction has got it right – and where they may be missing some crucial information.

How We Got Here

The mid-20th century was a heady time at Bell Laboratories. At its peak, it employed upward of 25,000 people dedicated entirely to research and development.

Their innovations were enormous, and they lie at the root of the very device you are reading this on: the transistor, the laser, semiconductor electronics, the solar cell, television, the C and C++ programming languages, the fax machine, and by the 1960s, the goddamn video phone.

Yes, this is what you think it is, and yes, they existed in the 1960s.

For the sake of contrast, Google, one of our greatest innovators of today, employs roughly 50,000 people across all of its departments, and its greatest offerings have been, well… a slightly improved version of the fax machine and the videophone.

In their heyday, researchers at Bell Labs earned seven Nobel Prizes, and in 1960, the IRE (a forerunner of today's IEEE) awarded its Medal of Honor to Harry Nyquist, who had done research there for almost 40 years.

Back in the 1920s, the Yale graduate had worked on an early version of the fax machine. By 1947, he had made his most lasting contribution: a mathematical proof showing that any sound wave can be perfectly re-created, so long as it is limited in bandwidth and sampled at a rate more than twice its highest frequency.

In this case, practice sprang from theory. Nyquist's Theorem laid the groundwork for what would become digital audio. He had provided a mathematical proof that predicts a real law of the natural world. Much like with analog audio recording, the proof for digital audio existed on paper long before it became a reality.
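
If you'd like to see the theorem in action, here is a minimal sketch in Python. The rates and tone below are our own toy numbers, not anything from Nyquist's work: it samples a band-limited tone, then rebuilds the continuous waveform from nothing but those samples, using standard Whittaker-Shannon sinc interpolation.

    # Toy demonstration of the sampling theorem via sinc interpolation.
    # All values here are illustrative.
    import numpy as np

    fs = 8.0   # sample rate (Hz), arbitrary for the demo
    f0 = 3.0   # tone frequency, safely below the Nyquist limit of fs/2
    n = np.arange(64)                            # sample indices
    samples = np.sin(2 * np.pi * f0 * n / fs)    # the band-limited signal, sampled

    # Rebuild the waveform on a much finer time grid from the samples alone.
    t = np.linspace(0, (len(n) - 1) / fs, 4000)
    rebuilt = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

    # Away from the edges of this finite block, the reconstruction error is
    # small, and it shrinks as the block of samples grows.
    truth = np.sin(2 * np.pi * f0 * t)
    print(np.max(np.abs(rebuilt - truth)[1000:-1000]))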

Early Digital

Of course, it can sometimes take practice a while to catch up with theory.

It wasn’t until 1983 that a popular and practical digital audio format, the compact disc, was introduced to the consumer market. But from its inception, the 16-bit/44.1kHz standard promised greater audio fidelity than vinyl or even magnetic tape. This is an established fact by any criterion we can measure: frequency response, distortion, signal-to-noise, even practical dynamic range.
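
The arithmetic behind that claim is easy to check. A quantized format's theoretical signal-to-noise ratio follows the standard 6.02-dB-per-bit rule; this back-of-the-envelope snippet (our own illustration, not a measurement of any particular converter) puts 16-bit audio around 98 dB, comfortably beyond what vinyl or tape can manage.

    # Theoretical best-case SNR for a 16-bit format (standard quantization
    # math; the comparison figures are common rules of thumb, not measurements).
    bits = 16
    snr_db = 6.02 * bits + 1.76
    print(snr_db)   # ~98 dB; vinyl and tape typically manage roughly 60-70 dB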

Of course, some of us still prefer the sound of older technologies, but when we do, it is not for the sake of transparency. Even the best older analog formats sound less like what we feed into them than a properly designed 16/44.1 converter. This can be confirmed by both measurement and unbiased listening.

But even though 16/44.1 was a theoretically sound format from the start, it took decades for it to reach the level of quality it has attained today – just as it had taken decades for Nyquist’s Theorem to lead to the creation of a viable consumer format in the first place.

Now in 2013, the 16/44.1 converter of a Mac laptop can have better specs and real sound quality than most professional converters from a generation ago, not to mention a cassette deck or a consumer turntable. There’s always room for improvement, but the question now is where and how much?

Improvements at 44.1: Fixing the Clock

There have been a few major improvements to basic converter technology over the years. They have come largely when subjective listeners and objective designers have shared common goals.

At first, digital converters lacked sufficiently accurate clocking, which could introduce significant “jitter”: time-based anomalies which show up in the signal as high-frequency distortion.
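
If you want to see how timing error alone turns into distortion, a quick simulation makes the point. This is a rough sketch with an assumed 2 ns of RMS jitter, not a spec of any real converter: it samples a pure tone at slightly mistimed instants and measures the spurious energy that results.

    # Rough jitter simulation: sample a tone at jittered instants and compare
    # against the ideally clocked version. The jitter figure is hypothetical.
    import numpy as np

    fs = 48_000.0
    f0 = 10_000.0
    jitter_rms = 2e-9                      # 2 ns RMS timing error (assumed)
    n = np.arange(4096)
    t_ideal = n / fs
    t_jittered = t_ideal + np.random.normal(0.0, jitter_rms, n.size)

    clean = np.sin(2 * np.pi * f0 * t_ideal)
    jittered = np.sin(2 * np.pi * f0 * t_jittered)

    # The residual is noise/distortion created purely by the timing errors.
    error_db = 20 * np.log10(np.std(jittered - clean) / np.std(clean))
    print(f"jitter-induced error: {error_db:.0f} dB relative to the signal")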

Upgrading the clocks on digital converters became a huge point of focus for some time. There was even a moment when external clock upgrades could provide significant benefits in many systems.

But that was then and this is now. Technology always advances, and today, external clocking is far more likely to increase distortion and decrease accuracy when compared to a converter’s internal clock. In fact, the best you can hope for in buying a master clock for your studio is that it won’t degrade the accuracy of your converters as you use it to keep them all on the same page.

There are, however, occasions when switching to an external clock can add time-based distortion and inaccuracies to a signal that some listeners may find pleasing. That’s a subjective choice, and anyone who prefers the sound of a less accurate external clock to a more accurate internal one is welcome to that preference.

This is a theme that will pop up again and again as we explore the issues of transparency, digital audio, sampling rates, and sound perception in general: Sometimes we do hear real, identifiable differences between rates and formats, even when those differences do not reveal greater accuracy or objectively “superior” sound.

Improvements at 44.1: Fixing the Filters

Clocking wasn’t the only essential improvement that could be made at the 44.1kHz sample rate.

The earliest digital converters lacked well-designed anti-aliasing filters, which are used to remove inaudible ultrasonic frequencies and keep them from mucking up the signals that we can hear.

Anti-aliasing filters are a basic necessity that was predicted by Nyquist’s Theorem decades ago. Go without them and you are dealing with a signal that is not bandwidth-limited, which Nyquist clearly shows cannot be rendered properly. Set their cutoff too low and you lose a little of the extreme high end of your frequency response. Make them too steep and you introduce ringing artifacts into the audible spectrum.
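
To make “cannot be rendered properly” concrete, here is a toy demonstration of aliasing, with frequencies chosen purely for illustration: sample a 30 kHz tone at 44.1 kHz with no anti-aliasing filter in front of the converter, and its energy folds back to 44,100 - 30,000 = 14,100 Hz, squarely inside the audible band.

    # Toy aliasing demo: an unfiltered ultrasonic tone folds into the audible
    # band when sampled. Frequencies here are illustrative only.
    import numpy as np

    fs = 44_100.0
    f_ultrasonic = 30_000.0                 # above the 22.05 kHz Nyquist limit
    n = np.arange(8192)
    x = np.sin(2 * np.pi * f_ultrasonic * n / fs)

    # Look at where the energy actually lands in the sampled spectrum.
    spectrum = np.abs(np.fft.rfft(x * np.hanning(n.size)))
    freqs = np.fft.rfftfreq(n.size, d=1 / fs)
    print(f"peak lands near {freqs[np.argmax(spectrum)]:.0f} Hz")   # ~14100 Hz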

It’s a series of tradeoffs, but even at 44.1, we can deal with this challenge. Designers can oversample signals at the input stage of a converter and improve the response of the filters at that point. When this is done properly, it’s been proven again and again that even 44.1kHz can be completely transparent in all sorts of unbiased listening tests.
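
Here is a rough sketch of why oversampling eases the filtering problem, using SciPy’s kaiserord to estimate how long an FIR filter must be for a given transition band. The 96 dB stopband target and the 8x oversampling factor are our own assumptions for illustration, not figures from any particular converter.

    # Filter-length estimates: a brick-wall filter at plain 44.1 kHz versus
    # the much gentler filter an 8x-oversampled front end needs.
    from scipy.signal import kaiserord

    atten_db = 96.0                                # stopband attenuation target

    # Plain 44.1 kHz: pass 20 kHz, stop by the 22.05 kHz Nyquist limit.
    narrow = (22_050 - 20_000) / (44_100 / 2)      # normalized transition width
    print(kaiserord(atten_db, narrow)[0])          # long, steep filter (~130 taps)

    # 8x oversampling (352.8 kHz): the first filter only has to be down by
    # the new Nyquist limit of 176.4 kHz, leaving an enormous transition band.
    wide = (176_400 - 20_000) / (352_800 / 2)
    print(kaiserord(atten_db, wide)[0])            # dramatically shorter (~15 taps)

The steep filtering can then happen in the digital domain, where it is cheap, repeatable, and easy to get right.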

But that doesn’t mean that all converter companies keep up with what’s possible. Sometimes different sampling rates can and do sound significantly different within the same converter. But this is usually because of design flaws – purposeful or accidental – at one sampling rate or another. More on that in a minute.

When More is Better: Making The Filters Even Better

With all that said, there are a few places where higher sample rates can be a definite benefit.

  • Felipe González Avalos

    Nice article, but it’s focused only on the PCM world. What about DSD and DXD, where the sample rates can go up to 11 MHz?

    Cheers!

  • Justin C.

    Very different concept in that context, Felipe, but agreed, we should certainly do a story on those formats some time. Thanks for the recommendation!