In the evolutionary cycle of computing technology, the music industry is right on the cusp of something huge. 2006 promises to be a pivotal year in the computer-upgrade arena as it ushers in several next-big-thing technologies: 64-bit CPU architectures, dual-core processors, dual dual-core configurations, support for ludicrous amounts of physical RAM, and new operating systems and applications to take advantage of all these goodies and more.
In the computing equivalent of a Darwinian genetic mutation, computers are about to take the kind of quantum leap in processing power and system design that comes about only once a decade or longer. But what does it all mean for you and your music? How will digital audio applications benefit today, tomorrow or five years from now? Do bigger numbers merely promise "more" and "faster," or will they actually improve audio quality and the sound of your mixes?
The PowerPC G5 in Apple's desktops was arguably the first 64-bit design to gain any kind of commercial success in the personal desktop market. Housed within every Power Mac G5 manufactured since 2003, the chip has gained a solid foothold even though many installed operating systems and applications have yet to take full advantage of its power. But on the PC front, unless you're a compulsive hardware geek or bleeding-edge early adopter, chances are pretty good that what currently lies under your computer's hood is a CPU based on some iteration of decade-old 32-bit hardware technology. Although clock speeds have steadily increased throughout the years, very little has been done in the way of improving overall PC processor architecture and data busing since the mid-1990s — until now, that is.
During the past two years, several major advances have been made in PC processor designs, with both Intel and AMD battling for early supremacy in the 64-bit space. Known within the industry as x64 CPU architecture, support for 64-bit word lengths at the processor level comes as an extension to the current 32-bit x86 architecture. AMD brought that x86-compatible extension (AMD64) to the desktop first with its Athlon 64 and Opteron chips, while Intel initially pursued a separate 64-bit path with its Itanium chip — built on the distinct IA-64 architecture and targeted mainly at enterprise and server applications — before adopting the x64 extension itself.
You see, 64-bit technology has been slow to catch on with desktop users — not so much because of cost (64-bit chips are actually less expensive than many flagship models of the past), but because consumers typically wait for hardware and software vendors to align before making the jump. At the time, nobody was doing much to educate the consumer on the benefits of 64-bit computing. And without consumer interest in 64-bit products, OS and application vendors weren't in any hurry to partner and invest time or money, so complacency and the status quo stalled x64's momentum all around.
A few key industry alliances and proofs of concept later, both consumers and the media seem to be jumping on the bandwagon, and now definitely appears to be the time for 64-bit computing to shine. The benefits are pretty straightforward. First, a 64-bit CPU architecture boasts deeper and wider data registers than x86 technology, processing data in chunks twice the size of what a 32-bit CPU can handle. Of the places a system stores and retrieves data, registers sit closest to the CPU, followed by cache, RAM and disk in increasing order of distance and access time, making registers the fastest storage a program can touch. A Pentium 4-class CPU has only eight 32-bit general-purpose registers and eight floating-point registers. The x64 architecture doubles the general-purpose registers to 16, each now 64 bits wide, and likewise doubles the floating-point registers to 16. The bottom line is that 64 bits delivers more data per clock cycle, helping systems run faster and more efficiently, which translates into better performance.
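As a rough illustration of why register width matters, here is how a 64-bit quantity has to be juggled on a 32-bit machine. This is a sketch in Python for clarity; a real CPU does this with hardware registers and extra instructions, not interpreted code:

```python
# Illustration: a 64-bit value must be split into two 32-bit words on a
# 32-bit CPU, but fits in a single register on an x64 CPU.
value = 0x1122334455667788  # a 64-bit quantity

# A 32-bit machine handles this in two register-sized halves:
high = (value >> 32) & 0xFFFFFFFF   # 0x11223344
low = value & 0xFFFFFFFF            # 0x55667788

# Recombining the halves costs extra instructions (and clock cycles);
# an x64 register holds the whole value at once.
recombined = (high << 32) | low
assert recombined == value
```

Every 64-bit add, compare or move on a 32-bit design pays some version of this split-and-recombine tax, which is one reason wider registers translate directly into fewer instructions per operation.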
The second benefit, equally key to boosting performance, is x64's ability to address significantly more RAM than today's 32-bit chips, which are limited to 4 GB split between the OS and applications (practically about 3 GB in Windows). A 64-bit architecture extends this practical limit to a binary-swirling 1,024 GB, or 1 terabyte (TB), of accessible RAM. This helps applications run faster when working with extremely large data sets by loading them directly into memory, bypassing slowpoke virtual-memory access and read-from-disk cycles.
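The address-space arithmetic behind those figures is easy to check. A pointer with n bits can distinguish 2^n byte addresses, and the article's practical 1 TB figure corresponds to 40 bits of physical addressing:

```python
# Address-space arithmetic behind the limits quoted above.
GIB = 2**30  # one gigabyte (binary)

# A 32-bit pointer can distinguish 2**32 byte addresses:
addressable_32 = 2**32
print(addressable_32 // GIB)   # prints 4 -- the 4 GB shared by OS + apps

# x64 pointers are 64 bits wide; the practical 1 TB figure cited above
# corresponds to 40 bits of physical address space:
practical_x64 = 2**40
print(practical_x64 // GIB)    # prints 1024 -- i.e., 1 terabyte
```

The full 64-bit address space is far larger still (2^64 bytes); the 1 TB ceiling reflects what chipsets and operating systems of the era actually wire up, not an architectural limit.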
The direct benefit of this for a DAW environment, of course, is that more of your session's data — be they audio tracks, sample loops, plug-in instances, virtual instruments or real-time background processes such as time shifting — can reside entirely within RAM, giving the software quicker access to the data and uninterrupted computation. A practical example would be the ability to keep a significantly larger pool of loops in memory, allowing for all sorts of new and exciting ways to process an entire song's worth of audio in real time. Likewise, you could store exceedingly large sample sets in RAM with the ability to access more of those sets simultaneously, never having to stream off disk again. Experts also believe that greater RAM access will inspire a whole new generation of complex audio-resynthesis and sample-based synthesizers that can be fed real-time audio and respond immediately to live input.
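To put some hypothetical numbers on keeping audio in RAM, here is a back-of-the-envelope sizing sketch. The format assumptions (24-bit samples, 96 kHz, stereo, one hour of material) are illustrative, not drawn from the article:

```python
# Back-of-the-envelope sizing for keeping sample material in RAM.
# Assumed format: 24-bit (3-byte) samples, 96 kHz, stereo -- an
# illustrative choice, not a figure from the article.
bytes_per_second = 96_000 * 3 * 2            # 576,000 bytes per second
seconds = 60 * 60                            # one hour of raw material
total_bytes = bytes_per_second * seconds

print(round(total_bytes / 2**30, 2))         # prints 1.93 (GiB)
```

An hour of uncompressed high-resolution stereo fits in about 2 GB — comfortable on a well-specified 32-bit machine only if little else is loaded, but a small fraction of what a 64-bit system can hold, which is why entire sample libraries become candidates for RAM residence.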
Granted, with memory module sizes and prices where they are today, few 64-bit PCs will have that much memory, but applications will be designed with the potential to access all of it. As the demand increases in years to come, RAM prices will surely come down, and the ability to address a terabyte of physical memory will become extremely attractive.
None of this extra power is relevant unless every component in your system is updated to work with it. To benefit the most, you must upgrade not only your computer to a 64-bit CPU but also the OS, applications, audio drivers and plug-ins. True performance gains and enhanced system-wide functionality cannot be had from one without the others. On the OS front, Microsoft answered 64-bit wishes early — Windows XP Professional x64 Edition has been on store shelves for some time now — as did Apple with Mac OS X Tiger, released early in 2005. Both operating systems are extremely well-equipped for pro audio, sporting mature and stable kernels supporting symmetric multiprocessing. With the proper chip in place, users who want to be the first to venture into 64-bit computing can do so immediately, and both provide backward-compatibility support for legacy 32-bit applications.
Finding apps that fully support 64-bit processing is a little trickier, however. Cakewalk recently released Sonar 5, the first truly 64-bit parallel-processing DAW, which is reportedly enjoying performance gains in the 20 to 30 percent range over the company's previous 32-bit technology. “This kind of performance gain is huge,” says Cakewalk Chief Technology Officer Ron Kuper. “If you have a 3GHz processor, a 30 percent performance boost makes it feel more like a 4GHz processor. When was the last time you got a whole GHz for free?”
PC owners, in particular, looking to make the 64-bit jump in 2006 are waiting with bated breath for the next major Windows release, Windows Vista (previously code-named Longhorn), which promises to bring some additional benefits to DAWs. “Microsoft has redesigned the audio-driver stack so that less work is done in kernel mode, which equals increased stability, meaning no more blue screens!” Kuper says. “It will also have new kernel features to allow DAWs to run more smoothly, such as ultra-high-priority threads and the ability to page-lock more RAM.”
Dual-Core And More
Intel first flirted with parallel execution on the desktop when it introduced Hyper-Threading Technology about three years ago. Hyper-Threading isn't true multicore; it lets the operating system schedule work onto unused execution units in a single-core processor, effectively fooling the OS into assuming it is running on a dual-processor system. With CPUs essentially at the ceiling of their clock speeds now, though, true parallel computing provides the next major bump in speed — at least for the foreseeable future.
Parallel computing can come in several forms, but most desktop users today will experience it as either dual-processor (two separate physical chips), dual-core (essentially two identical processors on the same chip) or a combination of both. Because the cores share a die, and often cache, dual-core designs have the benefit of being slightly faster than discrete chips communicating over a bus. But as Apple has most recently demonstrated with its new Power Mac G5 Quad, you can successfully pair two dual-core 64-bit processors for a total of eight double-precision floating-point units per computer, along with four Velocity Engines. Yes, this is the kind of power progression you can look forward to!
Intel has been shipping dual-core Pentium processors for nearly a year at many different price levels, some of which support Hyper-Threading Technology, enabling four simultaneous threads. The company expects a very fast dual-core ramp during the next few years and will also deliver quad-core parts in the not-too-distant future. But what lies beyond that?
“Looking further out over the next decade, we continue to see valued usage models that will benefit from even greater degrees of computer parallelism,” says Dan Snyder, PR manager at Intel Corporation. “One way to continue to deliver higher levels of performance from parallelism is by continuing to add more cores to future CPUs.”
To address this, Intel has already invested heavily in an R&D infrastructure intended to enable these future platforms by innovating past the hardware and software challenges that come into play when moving to tens and even hundreds of parallel cores. “We have the research programs in place to investigate these issues, so if, in the future, we decide to build CPUs with many cores, we have the ability to do so,” Snyder says. And with Apple migrating from its current PowerPC processors to Intel architecture in 2006, both PC and Mac platforms are on a level playing field for the first time.
Another slant on multicore processing came when Apple introduced the Distributed Audio Processing concept in Logic Pro 7. With it, many users began to consider the economics of buying multiple G5s and linking them in a network to create a powerhouse that could easily topple even the most expanded Digidesign Pro Tools|HD setup. Whether through this networked approach or integrated multicore/multi-CPU systems, multiprocessing isn't a feature that universally benefits every category of software application. Fortunately for musicians and producers, DAWs stand to gain a great deal from it, as they can break down their workload into parallel subtasks.
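The decomposition described above — a DAW breaking its workload into parallel subtasks — can be sketched with a worker pool that processes each track independently. This is a structural illustration only: the function names and data are invented, a real DAW does this in native code on real-time threads, and CPython's global interpreter lock limits the actual CPU parallelism you would see here.

```python
# Sketch of per-track task decomposition: hand each track's DSP to a
# worker pool, then collect the results. Names and data are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def process_track(samples, gain):
    # Stand-in for per-track DSP (EQ, dynamics, plug-in chains, etc.)
    return [s * gain for s in samples]

tracks = {"kick": [0.5, -0.5, 0.25], "bass": [0.1, 0.2, -0.1]}

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = {name: pool.submit(process_track, buf, 0.8)
               for name, buf in tracks.items()}
    mixed = {name: f.result() for name, f in futures.items()}

print(mixed["kick"])  # prints [0.4, -0.4, 0.2]
```

The hard parts Kuper alludes to don't show up in a toy like this: real engines must avoid contention on shared buffers, keep per-core memory traffic down, and scale the task granularity to however many cores are present rather than hard-coding two or four workers.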
In a Cakewalk Technology White Paper presented at AES in October 2005, Kuper noted that Sonar 5 enjoys substantial performance gains, upward of 50 percent, when going from a single-core to a dual-core configuration. “Note that parallel processing in a DAW does not come automatically,” Kuper cautions. “A DAW must be carefully designed to support parallelism, which means it must carefully break its workload down into smaller parts. Also, it must carefully manage memory and resource contention between different threads and processors. Finally, it should be designed for general scalability across any number of processors, not only two or four.” This is an excerpt from the Electronic Musician article “The Future of DAW Computing.”
To stay informed about new articles, sign up for the free DigiFreq Music Technology Newsletter.