I find myself surrounded by people who accept that 44100hz is the best sample rate for their sound card, because their music itself has a sample rate of 44100hz, so playback is 'bit-matched' and therefore as clear as possible. I am writing this tip today to debunk that myth, as I find it far from the truth, and I shall use examples to back up my reasoning.
For my examples: audio is generally recorded at 44100hz, which is 44100 samples per second. If we play back a 44100hz track at 44100hz, that's
44100 / 44100 = 1 = 100% accuracy
Since 88200hz is an exact integer multiple of 44100hz, we can deduce that playing at double the rate is also 100% accurate: every source sample lands exactly on every second output sample.

44100 / 88200 = 0.5 | (0.5 / 0.5) = 1 = 100% accuracy
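To make the ratio idea concrete, here is a minimal Python sketch of the 'accuracy' figure used throughout this tip. The helper name, and the rule that exact integer multiples count as bit-perfect, are my own framing of the reasoning above, not anything from a real audio API:

```python
# Hypothetical helper for this tip's "accuracy" figure:
# exact integer multiples count as bit-perfect, otherwise it's
# the ratio of the lower rate to the higher one.
def accuracy(track_rate, output_rate):
    lo, hi = min(track_rate, output_rate), max(track_rate, output_rate)
    if hi % lo == 0:  # e.g. 88200 is exactly 2 x 44100
        return 1.0
    return lo / hi

print(accuracy(44100, 44100))  # 1.0  (bit-matched)
print(accuracy(44100, 88200))  # 1.0  (exact double)
print(accuracy(44100, 48000))  # 0.91875, i.e. ~92%
```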
Don't get me wrong, bit-matched playback is indeed the best possible way to play back your music (if your sound card supports it). As most sound cards sample and mix sound at 48000hz, a track at 44100hz is upsampled.
44100 / 48000 = 0.91875 = ~92% accuracy
So it should make sense to use 44100hz when DJing with Cross or Producer, right? But there are key points being missed here. You are not playing your audio back at exactly 44100 samples per second when DJing, are you? If you use the pitch control at all, you'll know that your music runs faster or slower than normal. Assume the track is standard CD audio at a rate of 44100hz. At 100% speed, pitch 0.0%, that is indeed 44100 samples per second.
So let's say the track needs to be sped up a bit (we're beat-matching here, after all), and I push the pitch control to +2.4%. We're now playing the track at 102.4% speed. The rate was 44100 samples per second, but since we've sped the track up, the effective sample rate has gone up with it.
44100 * 102.4% = 45158.4hz (samples per second)
The track is a bit faster, right? Instead of 44100 samples, we are pushing just under 45200 samples per second. Not a massive increase, but neither was the pitch change. Let's increase the pitch of a track to +7.8%, which is not uncommon in some forms of electronic music.
44100 * 107.8% = 47539.8hz (samples per second)
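The effective-rate arithmetic above is just a multiplication; here's a quick sketch (the function name is mine, purely for illustration):

```python
# Effective samples-per-second of a track playing at a given pitch offset.
def effective_rate(base_rate, pitch_percent):
    return base_rate * (1 + pitch_percent / 100)

print(effective_rate(44100, 2.4))  # ~45158.4 samples per second
print(effective_rate(44100, 7.8))  # ~47539.8 samples per second
```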
Suddenly we're verging on 48000hz, which as we know is a standard sample rate we can choose on our cards. However, we've stubbornly set 44100hz as the output frequency. If you know anything about D/A conversion, this frequency clock is set in stone: the audio is sampled at 44100hz and converted to analogue audio.
Our track has not been resampled; it is simply playing at a faster effective rate, 47539.8 samples per second, and we are RESAMPLING this down to 44100hz. As you can see, we have reduced the effective sample rate and lowered the quality: a decrease of 3439.8 samples per second, for an overall accuracy of 92.76%.
However, say we had set the sample rate to 48000hz instead, and the same thing happened. The audio is upsampled by roughly 1%, an increase of 460.2 samples, for an overall accuracy of 99.04%.
OK, so if we look at the middle value between the two output rates, 46050hz, which is roughly 4.4% faster than 44100hz, you might say: 'Well, if my pitch control doesn't usually go above 4.4%, surely 44100hz is the better sample rate?' I'll mention a few points here.
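Before those points, one way to sanity-check the 4.4% figure is to compute the accuracy ratio against both output rates across the pitch range. A sketch (helper names are mine): note that under this ratio metric the crossover actually sits at the geometric mean of the two rates, sqrt(44100 * 48000) ≈ 46009hz, i.e. about +4.3%, slightly below the arithmetic midpoint, so 48000hz is already winning by +4.4%:

```python
# Compare the accuracy ratio (lower rate / higher rate) against a
# 44100hz output and a 48000hz output as the pitch rises.
def accuracy(rate_a, rate_b):
    lo, hi = min(rate_a, rate_b), max(rate_a, rate_b)
    return lo / hi

for pitch in (0.0, 2.4, 4.4, 7.8):
    eff = 44100 * (1 + pitch / 100)  # effective playback rate at this pitch
    print(f"+{pitch}%: vs 44100hz = {accuracy(eff, 44100):.4f}, "
          f"vs 48000hz = {accuracy(eff, 48000):.4f}")
```

At +2.4% the 44100hz output is still closer; at +4.4% and +7.8% the 48000hz output wins, matching the 92.76% and 99.04% figures above.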
- High frequencies are what matter most at these differences in sample rates. I could send a sub-woofer audio resampled down to 8000hz and you would hardly notice any difference compared to 44100hz.
- You're using timecode. Worse, if you're using timecode vinyl, the timecode fluctuates slightly and is never perfectly constant, so even at pitch 0% the 44100hz rate you're playing at drifts up and down.
- Audio processing happens. Gain control, the internal equalizer, and VST effects all manipulate the audio in some form.
Anyway, listening tests are better than figures. Resample a track to 66150hz or 48000hz and tell me you can hear a difference from 44100hz. Here's a key hint: when mixing in Mixvibes at 48000hz compared to 44100hz, you should hear a marked improvement in the high frequencies when pitching up, and no difference at any other time.
Oversampling rather than undersampling is better for the higher frequencies, and with a 44100hz output you are almost always undersampling.
If your sound card supports 88200hz or 96000hz (doubles of the above frequencies) and your computer doesn't lag behind when using them, use them. As they double the output rate, the conversion from the track's rate to the output rate has twice the effective accuracy. Doubling the rate also halves the latency. Using the MacBook Pro's on-board audio I can manage 2.0ms latency at 96000hz with a tiny blip every 30 seconds or so; at 4.0ms and above I don't notice any blips at all.
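The latency halving is just buffer arithmetic: a buffer holding a fixed number of frames takes half as long to fill at double the rate. A sketch (the 192-frame buffer size is my assumption, chosen because it happens to match the 2.0ms figure at 96000hz):

```python
# Time to fill one audio buffer: frames / rate, converted to milliseconds.
def buffer_latency_ms(frames, sample_rate):
    return frames / sample_rate * 1000

print(buffer_latency_ms(192, 96000))  # 2.0 ms
print(buffer_latency_ms(192, 48000))  # 4.0 ms -- same buffer, half the rate
```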
If you're using line-in or thru mode as well, you're handling analogue audio there. No matter what rate you think anything is, it's real sound, and a higher frequency for A/D/A conversion is going to sound better no matter what. Especially for turntablists: there's a considerable difference between 44100hz and 48000hz in the high frequencies on a non-preamped phono input.
I hope this has made things clearer for some people. Bit-matched playback is ideal when the recording is fed directly to the sound interface, but with what DJs do to tracks these days, that is never the case.