Today's professional audio market uses chips made by a handful of digital signal processing (DSP) manufacturers. The most widely used chips are made, in alphabetical order, by Analog Devices, Intel, Motorola, Texas Instruments and Yamaha. Over the past three decades, DSP chips have developed from low-capacity devices into the advanced 32-bit and higher architectures used in today's processors and mixers, with manufacturers constantly improving performance. This performance is generally indicated by three properties: DSP power, audio quality and sound quality.
DSP power is measured in floating point operations per second, or FLOPS. Soon after the first chips entered the market, processing power reached millions of FLOPS - hence the prefix 'mega' was added, and DSP power was measured in MFLOPS.
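To get a feel for these numbers, here is a rough sketch of how a DSP budget might be estimated. The figures are illustrative assumptions, not taken from any datasheet: a textbook biquad EQ filter costs roughly 9 operations per sample (about 5 multiplies and 4 adds), and the mixer runs at the 48kHz professional sample rate.

```python
SAMPLE_RATE = 48_000    # samples per second (pro audio standard)
OPS_PER_BIQUAD = 9      # ~5 multiplies + 4 adds per sample (textbook biquad EQ band)

def eq_flops(channels: int, bands_per_channel: int) -> float:
    """FLOPS needed to run `bands_per_channel` biquad EQ bands on every channel."""
    return channels * bands_per_channel * OPS_PER_BIQUAD * SAMPLE_RATE

# A 64-channel mixer with 4-band EQ on every channel:
mflops = eq_flops(channels=64, bands_per_channel=4) / 1e6
print(f"{mflops:.1f} MFLOPS")  # ~110.6 MFLOPS - well within a modern chip's budget
```

Even this modest console already consumes over a hundred MFLOPS for equalisation alone, which is why DSP power quickly moved from the mega to the giga scale.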
DSP chip technology has evolved in line with so-called Moore's Law (the observation that the number of transistors in a dense integrated circuit doubles approximately every two years). So, just as with Intel and AMD computer chips, 'mega' was soon replaced by 'giga', with one GFLOPS representing one billion floating point operations per second.
In the world of audio DSP, two types of DSP chip are available: the dedicated DSP and the Field Programmable Gate Array (FPGA). Both do the same thing; the difference is in the programming method. In most cases dedicated DSP chips and FPGA chips are used side by side, each performing the tasks at which it is most efficient. Nowadays, the processing power of commercially available DSP and FPGA chips is huge - enough to serve a small mixer with multiple DSP 'plug-in' algorithms. For larger systems, multiple chips can be combined. Recent developments in networking technology (e.g. Dante, AES67) allow DSP systems from different manufacturers to be combined, scaling up to virtually unlimited DSP power.
Audio quality is defined as 'the accuracy of the representation of audio signals flowing through the DSP chip'. In a digital system, audio quality is determined by the bit depth (measured in bits) and the sample rate (measured in thousands of hertz, or kHz). This started off in the early 1980s with 8-bit and 44.1kHz, soon moving to 16-bit ('CD quality'), then settling at a 32-bit data architecture for today's mainstream professional systems. This is not because it's impossible to go higher - in fact most DSP chips use a higher internal bit depth to run more complex algorithms at 32-bit quality - but because 32 bits covers the dynamic range of human hearing and provides enough additional headroom for live mixing. More bits in the data architecture would fall outside the threshold of human hearing. Then there's the sample rate, which Sony and Philips established at 44.1kHz for the recording market in the 1980s, due to the limitations of the compact disc (CD) format, and 48kHz for the broadcast market for compatibility with video systems. The professional audio market adopted the 48kHz broadcast standard, simply because it was slightly better than the recording standard, and gave enough bandwidth and timing accuracy for live sound reinforcement. For high-end recording, 96kHz is often used to support listening through very high quality (studio monitor) speakers, but material is down-sampled to 48kHz or 44.1kHz again for mass-market sound reproduction such as CD, DVD and MPEG.
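The relationship between bit depth and dynamic range can be worked out directly: each bit of a linear PCM word contributes roughly 6dB of dynamic range. The short sketch below computes the theoretical figures for the common bit depths mentioned above; comparing them with the roughly 120dB dynamic range of human hearing shows why 32 bits is considered enough, with headroom to spare.

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an N-bit linear PCM signal (~6.02 dB per bit)."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24, 32):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
# 16-bit: ~96 dB, 24-bit: ~144 dB, 32-bit: ~193 dB
```

At 32 bits the theoretical dynamic range is around 193dB - far beyond the threshold of human hearing, which is the point the paragraph above makes.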
Sound quality is defined as 'the increase in pleasantness or appropriateness of a listening experience' as a result of using the DSP. Consider a very powerful DSP system with all DSP functions switched off, so the audio signal flows through the DSP system without being affected. In that case, there is no 'sound quality', as the signal at the output is identical to the signal at the input. Only when a DSP algorithm is switched on - an equaliser, for example - does the sound quality change. The conclusion is that sound quality depends purely on the algorithms, not on the DSP hardware. Note that in professional audio applications where a sound engineer operates the DSP system, the user interface has a huge influence on the sound engineer's decisions - so not only the algorithm itself, but also its user interface determines the sound quality.
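This bypass argument can be illustrated with a toy example. The `process` function below is a hypothetical DSP block (a bare gain stage with a bypass switch, not any real product's algorithm): when bypassed, the output is identical to the input and there is no 'sound quality' to speak of; only engaging the algorithm changes the sound.

```python
def process(samples, gain_db=0.0, bypass=True):
    """Hypothetical DSP block: a gain stage with a bypass switch."""
    if bypass:
        return list(samples)        # all processing off: signal passes untouched
    scale = 10 ** (gain_db / 20)    # convert gain in dB to a linear factor
    return [s * scale for s in samples]

signal = [0.1, -0.25, 0.5]
assert process(signal) == signal                     # bypassed: identical output
assert process(signal, 6.0, bypass=False) != signal  # algorithm engaged: sound changes
```

However powerful the chip running this code, the listening experience changes only when the algorithm acts on the signal - which is the article's point about sound quality.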
To summarise: today we have access to 'all you can eat' DSP power, provided by very powerful DSP chips or by combining DSP systems. There is no difference in audio quality between DSP chips. The difference in sound quality between DSP systems is not caused by the DSP chips, but by the DSP algorithms and their user interfaces. So, the question in the title of this micro tutorial was a trick one: with the algorithm and user interface given, all DSP chips 'sound' equal.
If you would like to go deeper into the topic of audio quality in DSP systems, check out the further reading materials below, or sign up to one of our YCATS Yamaha Commercial Audio Seminars. You can find the European schedule on www.yamahaproaudio.com/training
Next week's micro tutorial will be about networking technology: ‘The five most important factors in applying an audio network’.
Further reading:
- Quality In Networked Audio Systems
- ‘Performance And Response’: AES137 conference e-Brief