The Apple Intel switch

So, after having a couple of days to digest the big announcement, I thought I would fill you in on what this transition means from a scientist's perspective and for Macintosh users in general.

Right off the bat, I am going to tell you that for most users, this transition is not going to mean anything other than that Apple will be able to build more new products that meet customers' needs and desires. For most users accustomed to the OS X interface, I will guarantee that one year from now, if I sat you down in front of an Intel-based system running OS X and a PowerPC-based system running OS X, you would not know the difference. The users who are going to know the difference are those who write code or are involved in extracting every last bit of performance by configuring and optimizing their systems, particularly in parallel or cluster environments. In other words, most folks won't know or care, except that this transition will mean more and better products from Apple.

If you are one of those individuals who do more than run programs already written for you, or if you simply want to know more about this transition and the history behind it, then, good reader, read on.

When I read the Wall Street Journal's article pronouncing the deal a fait accompli the day before the keynote, I did not completely buy it at first. I suspected, or rather knew, that something was up (but let that go for now), and I presumed that there would be some hybrid deal and that the WSJ story was only part of the truth. Essentially, I felt that while it was technically feasible to switch the entire computer product line to x86, since Apple had maintained codebases for both PowerPC and x86, Apple would not actually make a wholesale switch to Intel chips; there was simply too much invested in both marketing and developer relations. After all, Apple had spent years promoting and making not insignificant investments in Altivec. Rather, I felt there were a couple of possibilities behind the rumor, including Intel producing Apple-licensed PowerPC chips, or x86 chips with Altivec extensions. Another possibility was that Intel chips would be used specifically in portable systems currently in development, because of the problems integrating the G5 into a portable environment. Or, what I have really been wanting and hoping for: a new device, the Newton reborn, perhaps running an ARM-based CPU from Intel. The more I thought about it, though, the latter two options would mean fragmenting the architecture of Apple's core operating system, OS X, which made me nervous (the embedded OS in the iPod notwithstanding), and the first option may not have been possible given the licensing relationships with IBM and Freescale.

So when the WWDC announcement was made that Apple was in fact migrating everything to Intel, I was absolutely surprised, even though I had concluded that the WSJ, reputable source that it is, might in fact have been correct. Given all of the hype Apple generated around the G5 at its introduction a couple of years ago, including having members of IBM up on stage, this was hard to believe. There was great optimism surrounding the G5 announcement, which led Steve Jobs to promise the availability of 3.0 GHz G5 chips by the same time the following year (incidentally, you may remember people talking about the Osborne effect back then). History has demonstrated that IBM was either unable or unwilling to produce higher performance chips, and, perhaps even more damaging, IBM was unable or unwilling to invest the resources in building a low-power version of the chip suitable for portable applications in a time frame that made economic sense to Apple. Apple had been investing major resources into making the G5 work in a portable solution, and it was simply not possible given the chips being made available to them. Therefore, in the absence of a common platform architecture from either Freescale or IBM that could meet performance goals, and in order to prevent fragmentation of the OS into a Microsoft-like world with a "portable OS X", a "desktop OS X" and a potential future "embedded OS X", Apple had to bite the bullet and reconfigure its strategy around a different CPU architecture. Think about that for a minute. To remain viable in the marketplace with a fragmented strategy, Apple would have been asking developers to create multiple versions of their code for multiple operating systems. That simply would not have been viable. More importantly, the customers who have come to expect a computing environment where everything simply works would have revolted against desktop applications separate from portable applications, and yet another application for potential future ultra-portable devices. Because of these issues, and the shift in the marketplace toward laptops and other portable devices, Apple had to secure the future of the platform around a single architecture, and I believe they made the right decision. What drove the choice of Intel in particular is what Intel is offering Apple Computer given Apple's strategy for media distribution in the face of Hollywood's demands.

Given that Intel is going to be the future of the Macintosh, how will this affect the scientific users of the platform that Apple has been carefully courting over the past four years? The most important issues will be ones of platform-specific optimization. I mentioned Altivec before, and one suspects it will be a sore spot for those individuals (including Apple's own programmers) who have lots of effort invested in optimizing code for that instruction set. How good was Altivec? If you took the time to write code optimized for this instruction set for functions that could benefit, that code could be eleven or more times faster than non-optimized code. Altivec is an impressive instruction set, and it was certainly fast enough to inspire many to carefully tune their code to take advantage of it; the Fast Fourier Transform (FFT) functionality, along with the filters and image processing options available with Altivec, was sweet. I should mention that it has not been just the scientific users who have seen the benefit of Altivec; the common user has absolutely benefited, specifically in QuickTime, iMovie, Photoshop plugins, iTunes and others. Even gamers on the Macintosh platform have benefited, as friends of mine in that community who have ported some of the biggest game hits have spent considerable resources integrating Altivec code into some of those programs. So, for those folks who have lots of hand-coding time invested in Altivec, there is reason to be upset, as the move to Intel will not use any of that code. Why optimize your code with things such as Altivec? For many scientific users, optimizing code to perform specific tasks and to tune performance is often necessary and desirable to complete the tasks at hand, and as noted before, the performance enhancements can be significant, as in the acceleration of BLAST calculations of sequence relationships in genetic data. However, historically, boutique codebases, particularly vector extensions, do not have a good record of long-term preservation or integration into common usage. This history should inspire folks to make sure most of their code is easily portable. There are a number of platform-specific vector instruction sets out there, including MMX and MMX2 from Intel, VIS from Sun, MDMX from SGI, MVI from Digital Alpha and 3DNow! from AMD. Taking advantage of proprietary instruction sets requires individual coding or calls to access them, which often means hand coding and greater expenditures of effort. For those programmers who have significant numbers of calls to Altivec in their code, what are they to do? Well, for most code, you have a couple of options from Apple's Universal Binary Programming Guidelines. Specifically, you can either:

1) "Use the Accelerate framework. The Accelerate framework, introduced in Mac OS X v10.3 and expanded in v10.4, is a set of high-performance vector-accelerated libraries. It provides a layer of abstraction that lets you access vector-based code without needing to use vector instructions yourself or to be concerned with the architecture of the target machine. The system automatically invokes the appropriate instruction set."

or 2) "Port AltiVec code to the Intel instruction set architecture (ISA). The MMX, SSE, SSE2, and SSE3 extensions provide analogous functionality to AltiVec. Like the AltiVec unit, these extensions are fixed-sized SIMD (Single Instruction Multiple Data) vector units, capable of a high degree of parallelism. Just as for AltiVec, code that is written to use the Intel ISA typically performs many times faster than scalar code."

With a few notable exceptions that have no corresponding instructions in MMX and SSE, most Altivec instructions have equivalents in the SSE instruction set, and the translation is handled automatically if you use Apple's vector-processing framework rather than specific Altivec calls. Additionally, Apple's own engineers say that while Altivec holds a slight performance edge, SSE is actually easier to implement. Once you have worked out the algorithm, which is the hard part, translating from Altivec to SSE, or just writing SSE, is fairly trivial.
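To make the first option concrete, here is a minimal sketch (in C) of computing an FFT through the Accelerate framework's vDSP routines rather than through hand-written Altivec. The routine names are from the vDSP API; the toy signal and transform size are placeholders for illustration. The same source builds for either architecture, with the framework choosing Altivec or SSE underneath.

```c
#include <Accelerate/Accelerate.h>   /* vDSP lives in the Accelerate framework */
#include <stdio.h>

int main(void)
{
    const unsigned long log2n = 3;               /* transform length 2^3 = 8 (placeholder) */
    const unsigned long n = 1UL << log2n;

    float real[8] = {1, 0, 1, 0, 1, 0, 1, 0};    /* toy input signal */
    float imag[8] = {0};
    DSPSplitComplex signal = { real, imag };

    /* Build the FFT setup once and reuse it for repeated transforms of this size. */
    FFTSetup setup = vDSP_create_fftsetup(log2n, FFT_RADIX2);
    if (setup == NULL)
        return 1;

    /* In-place forward complex FFT; vDSP dispatches to the host's vector unit. */
    vDSP_fft_zip(setup, &signal, 1, log2n, FFT_FORWARD);

    for (unsigned long i = 0; i < n; i++)
        printf("bin %lu: %f + %fi\n", i, signal.realp[i], signal.imagp[i]);

    vDSP_destroy_fftsetup(setup);
    return 0;
}
```

Compile with the Accelerate framework linked in (for example, cc -framework Accelerate fft_example.c) and the binary will use whichever vector unit the machine it runs on provides.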

Aside from the Altivec issues, most of the problems associated with bringing programs over from PowerPC to Intel will be bug-tracking issues related to big- versus little-endian byte ordering. The game developers who have historically ported Windows games to the Macintosh will be especially impacted by this, but once they get past having to support PowerPC, their jobs may actually get much easier.
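The classic trap is binary data written on one architecture and read on the other. Below is a minimal sketch of the defensive pattern, assuming a hypothetical data file of 32-bit counts written on a big-endian PowerPC; the swap macro is from OS X's <libkern/OSByteOrder.h> and costs nothing when the host byte order already matches.

```c
#include <stdio.h>
#include <stdint.h>
#include <libkern/OSByteOrder.h>   /* OSSwapBigToHostInt32 and friends */

/* Read 32-bit values from a file written in big-endian (PowerPC) order.
   By always storing data in one explicit byte order and swapping to host
   order on read, the same code is correct on both PowerPC and Intel Macs. */
int main(void)
{
    FILE *fp = fopen("counts.dat", "rb");        /* hypothetical data file */
    if (fp == NULL)
        return 1;

    uint32_t raw;
    while (fread(&raw, sizeof raw, 1, fp) == 1) {
        uint32_t value = OSSwapBigToHostInt32(raw);   /* no-op on PowerPC, swap on Intel */
        printf("%u\n", (unsigned int)value);
    }

    fclose(fp);
    return 0;
}
```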

What about other high performance computing options? I am not talking about the frame rates of Doom, but rather the raw compute performance of workstations and clusters. There are some types of data that do perform better on PowerPC, no question; that is what led many groups to invest lots of dollars in Xserve clusters. However, for many scientific users, Intel has also provided a very good price/performance ratio. It really depends upon the task, but regardless, Apple will be supporting the PowerPC for years to come, so any current investment in hardware is not going to become any more obsolete than it otherwise would through time. In fact, the success of this transition will make Apple's continued support of PowerPC hardware even more likely, and if you paid attention to the previous link to the Xserve, you might glean something of Apple's strategy from the URL. Additionally, IBM is far from done with PowerPC development, and there are still some very exciting PowerPC-based products coming to Apple in the next two years. Given Apple's support for that architecture, there is no reason to avoid any of it. We are likely going to be purchasing a number of G5 workstations in the next couple of months for a project, but we are going to keep in mind that any code we develop should be easily portable, which is made especially easy by Apple's freely distributed Xcode. Using the new version of Xcode, we can easily create universal binaries, or when the time comes, recompiling for Intel should be as simple as selecting the Intel architecture in the build settings. However, even if we do not compile specifically for Intel, Apple has us covered through a relationship that began years ago. Back in 1992, Peter van Cuylenburg was President and COO of a company called NeXT Computer (OS X is a direct descendant of the NeXTStep operating system). At any rate, he currently sits on the board of a company called Transitive and is doing what a board member should be doing: advising the company on potential strategic associations.
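For the curious, this is what "easily portable" looks like in practice with Apple's tools. A minimal sketch, assuming Apple's GCC driver and its -arch flags; the file and program names are placeholders. Building the same source for both architectures produces a single fat (universal) binary, with any architecture-specific code fenced off behind the compiler's predefined macros.

```c
/*
 * Building the same source as a universal binary from the command line
 * (Xcode does the equivalent when both architectures are selected):
 *
 *     gcc -arch ppc -arch i386 -o analyze analyze.c
 *
 * Apple's GCC defines __ppc__ or __i386__ (and __BIG_ENDIAN__ or
 * __LITTLE_ENDIAN__) for each slice, so architecture-specific paths can be
 * isolated while everything else remains plain portable C.
 */
#include <stdio.h>

int main(void)
{
#if defined(__ppc__)
    printf("Running the PowerPC slice (big-endian)\n");
#elif defined(__i386__)
    printf("Running the Intel slice (little-endian)\n");
#else
    printf("Running on an architecture this sketch does not recognize\n");
#endif
    return 0;
}
```

And for applications that never get recompiled at all, that Transitive relationship is the safety net.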

Software from Transitive is the basis of an Apple technology called Rosetta. Rosetta translates PowerPC instructions into x86 instructions on the fly, which allows most current PowerPC applications to run on Intel hardware unmodified. Rosetta is absolutely amazing; however, it has limitations. Specifically, Mac OS 7, 8, and 9 applications (Mac OS Classic applications) will not run under Rosetta, and applications that require a G4 or G5 CPU will not be supported. However, since Rosetta essentially emulates a G3 chip and reports the system as such to applications that ask, many applications that merely benefit from a G4 will still run by defaulting to that codepath, simply without the performance advantages that tuning for those chips or for Altivec provided. Specifically, these are the applications that cannot be run by Rosetta, as listed by Apple:

– Applications built for Mac OS 8 or 9
– Code written specifically for AltiVec
– Code that inserts preferences in the System Preferences pane
– Applications that require a G4 or G5 processor
– Applications that depend on one or more kernel extensions
– Kernel extensions
– Bundled Java applications or Java applications with JNI libraries that can't be translated
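One practical consequence of the AltiVec limitation is worth spelling out: code that checks for the vector unit at runtime and falls back to a scalar path keeps working under Rosetta, because the emulated G3 simply reports no AltiVec. Here is a minimal sketch of such a check in C, using the hw.optional.altivec sysctl that Mac OS X exposes; the surrounding program is purely illustrative.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

/* Returns 1 if the (real or emulated) CPU reports an AltiVec unit, 0 otherwise.
   Under Rosetta the emulated G3 reports none, so applications that honor this
   check drop to their scalar code path and keep running. */
static int has_altivec(void)
{
    int value = 0;
    size_t size = sizeof value;
    if (sysctlbyname("hw.optional.altivec", &value, &size, NULL, 0) != 0)
        return 0;   /* sysctl not present (e.g., native Intel): treat as no AltiVec */
    return value;
}

int main(void)
{
    if (has_altivec())
        printf("AltiVec available: taking the tuned vector path\n");
    else
        printf("No AltiVec: taking the portable scalar path\n");
    return 0;
}
```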

Where the scientific community gets in trouble with Rosetta is with all of the optimizations they have made for Altivec and the G4 and G5 chips. But remember that Apple is still going to be selling G5 systems for a couple of years and will be supporting them for years to come after that. In light of the shift in architectures that had to occur, Apple realizes that it needs to make this process as smooth as possible, and it is making considerable efforts to provide a buffer: encouraging the developer community (including the scientific HPC community) to move forward while maintaining as much backwards compatibility as possible given the differences between the two chip architectures.

What will the Intel-based Macintosh look like? The first systems out of the gate will be portable systems and smaller desktop systems, with the PowerMac series to come later. As an aside, the transition system Apple is distributing to developers as the "Transition Kit" is built around a Pentium 4 660 running at 3.6 GHz. The chip itself supports 64-bit extensions, but Apple has not said anything yet about supporting that functionality; perhaps we will hear more at the 64-bit WWDC sessions later today. Additionally, the transition system uses DDR2 RAM at 533 MHz, which should provide some performance boost for memory-dependent processes. Storage is via an SATA-2 interface, and all of the graphics demos you can see in the WWDC sessions or in the WWDC keynote are running on Intel integrated graphics that support Quartz Extreme. Drivers for video cards from ATI and Nvidia will be forthcoming, and engineers from those companies have already been hard at work. The development box also has FireWire 400 and USB 2.0, but is currently missing FireWire 800. Open Firmware issues are being addressed, and much of the functionality that Mac users require, including Target Disk Mode and booting while holding "C", is being worked on. Even in the Phoenix BIOS currently on the development kit box, NetBoot and booting from USB are present, and given some of Intel's newest technology, such as Active Management Technology allowing for on-chip control of I/O, there may be all sorts of new functionality, including emulation of Open Firmware features.

Remember, though, that we are not going to be seeing Pentium 4 chips in shipping Macintosh systems. Apple is not interested in Intel because of their current chips. Rather, they are interested in the future roadmap, including higher performance portable chips and, ultimately, high performance desktop chips that will, in two years' time, take over for the PowerMac systems. Again, if Apple did not make the change now, in two years' time they would have been left out in the cold. Freescale (formerly Motorola) has for years been chasing the embedded markets and is currently very interested in automotive applications; laptop chips are low on their priority list. IBM recently got out of the personal computer business by selling off its entire PC line to Lenovo, and is focusing its chip efforts on its own server line and on selling to the game console market. What about the newfangled Cell chip that everybody is talking about? Apple looked at it and decided that there were simply too many issues with making it work effectively as a portable and desktop chip. Most significantly, the Cell chip has been designed as a chip for game consoles and lacks certain logic control functionality, which would leave its performance lacking for general-purpose use. On top of that, porting code to this chip would have been much more work, and Transitive does not currently support the Cell chip. They do, however, support x86. Many people are also asking why Apple did not look at AMD, given the transition to x86. AMD certainly builds some nice chips, and in fact Apple has evaluated them. However, for portable applications and the DRM issues mentioned before, AMD does not have what Apple needs. Additionally, there have been, and would continue to be, supply issues with AMD, potentially leaving Apple in the same position it has been in with IBM.

So, to sum up, Apple made the decision to preserve a common chip architecture across the laptop and desktop computer lines, while allowing for potential new products to come. Mixed architectures for those environments would have been disastrous for Apple and would have caused widespread developer revolt. Additionally, given the rising percentage of sales of portable versus desktop systems, something had to be done to provide for new portable applications. The transition will not be seamless for developers and scientists, but Apple is doing what it can to minimize problems. For almost all end users, however, this is a good thing that will mean higher performance systems available in more configurations and solutions to meet your needs and desires. And most importantly for everybody using OS X, it will preserve our preferred computing platform into the future.

