To me, the highlight of today's x86 Systems Launch was not any individual server, but the focus on engineering complete x86 cluster systems for Oracle and non-Oracle workloads. That focus on complete-system engineering, coupled with other trends in system architecture, will profoundly change the way systems vendors design, and customers purchase, systems in the coming decade. Let me explain.
One of my favorite automobile companies, BMW, ran an advertising campaign a while back promoting the ability to configure your BMW to order from "a million possible combinations, give or take a nappa leather color option or two". That works well when you are selling cars, because at any given time a car is driven on one road by one driver, and there are many different types of drivers and roads. For many years, x86 server vendors have followed a similar design philosophy. The leading x86 vendors today offer a nearly endless combination of server form factors and options: 1-socket, 2-socket, 4-socket, 8-socket; rack mount, tower, blade; different I/O and memory capacities; and on and on. At one time that made sense, as each server was typically purchased for a dedicated application, and the endless options allowed an IT purchaser to configure and pay for only the features they needed. But unlike cars, the vast majority of x86 servers purchased today are not serving a single user or running a single application.
With the widespread server consolidation enabled by virtualization technologies and the ever-increasing power of multi-core CPUs, the vast majority of an organization's x86 compute demands can today be met with clusters made up of a single x86 server type. Cloud computing providers like Amazon EC2 have recognized this for years, as have High Performance Computing customers like Sandia National Labs. So why have system vendors continued to insist on gratuitously pumping out more and more x86 server models in every shape, size, and color? Well, if all you have to engineer is individual servers, then I guess you get creative. At Oracle, however, our x86 engineers have been busy designing complete x86 clusters to run Oracle and non-Oracle workloads, and that has led to some of the design decisions showcased in today's launch.
If I had to build an x86 cluster to handle the broadest possible set of workloads, I'd definitely use the new Sun Fire X4800. Powered by up to eight Intel Xeon 7500 series processors, one terabyte of memory, and eight hot-swappable PCIe ExpressModules, this is the most powerful, expandable, and reliable of Oracle's x86-based servers. Given that the PCIe ExpressModule standard was first announced by the PCI standards body in 2005, it's amazing that five years later we don't see more vendors using this standard to provide hot-swappable I/O cards for their servers. Sun first introduced PCIe ExpressModules in our Sun Blade family of blade servers several years ago, and the Sun Fire X4800 now continues their use. If your systems vendor isn't using the PCIe ExpressModule standard for hot-swap I/O and is only offering proprietary hot-swap solutions, or worse yet, no hot-swap I/O cards at all, you might want to point them to the 2005 announcement from the PCI-SIG. Of course, if you are designing servers intended to be used as single standalone systems instead of in clusters, then perhaps a choice of bezel color is a more important option.
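As an aside for the Linux-minded, here is a minimal sketch of what hot-swap I/O looks like from the operating system side, assuming a host running the standard Linux pciehp hot-plug driver (slot naming and driver support vary by platform, so treat this as illustrative rather than anything X4800-specific):

```python
# Illustrative only: with the Linux pciehp driver loaded, hot-pluggable
# PCIe slots appear under /sys/bus/pci/slots/. Writing "0" to a slot's
# power file powers it down for removal; writing "1" powers up a freshly
# inserted module.
from pathlib import Path

SLOTS = Path("/sys/bus/pci/slots")

def list_hotplug_slots():
    """Enumerate the hot-plug capable slots the kernel knows about."""
    if not SLOTS.exists():
        return []
    return sorted(p.name for p in SLOTS.iterdir() if (p / "power").exists())

def set_slot_power(slot: str, on: bool) -> None:
    """Power a slot on (True) or off (False); requires root."""
    (SLOTS / slot / "power").write_text("1" if on else "0")

if __name__ == "__main__":
    print("hot-plug slots:", list_hotplug_slots())
```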
While I don't have time to discuss all of today's product introductions, one more that I did want to cover is the new Sun Network 10GbE Switch 72p. Offering 72 10GbE ports in a single 1RU chassis, this switch is clearly designed for building clusters, not single servers. While everyone seems to be hawking 10GbE switches these days, most so-called "top of rack" switches only support 24 or 48 ports in a 1RU form factor. Replicating the full non-blocking fabric provided by the Sun Network 10GbE Switch 72p would require nine 24-port switches or five 48-port switches, up to 54 additional cables, an extra 1/5 of a rack of space, and significantly more power. When used in conjunction with Oracle's Sun Blade 6000 24p 10GbE NEM, one can easily build non-blocking fabrics of up to 160 nodes, or clusters of up to 720 nodes with oversubscription.
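To make that switch-count comparison concrete, here is a back-of-envelope Python sketch, assuming a generic two-tier leaf/spine (Clos) topology of my own choosing rather than any official sizing tool; it reproduces the nine- and five-switch figures, while exact cable counts depend on how uplinks are provisioned:

```python
# Back-of-envelope sizing: replicating N non-blocking host ports with
# fixed-radix switches in a two-tier leaf/spine (Clos) topology.
import math

def two_tier_fabric(host_ports: int, radix: int):
    """Return (leaves, spines, inter-switch cables) for a non-blocking fabric."""
    down_per_leaf = radix // 2                   # half the ports face hosts...
    leaves = math.ceil(host_ports / down_per_leaf)
    uplinks = leaves * (radix - down_per_leaf)   # ...half face the spine layer
    spines = math.ceil(uplinks / radix)
    return leaves, spines, uplinks               # each uplink is an extra cable

for radix in (24, 48):
    leaves, spines, cables = two_tier_fabric(72, radix)
    print(f"{radix}-port switches: {leaves} leaf + {spines} spine = "
          f"{leaves + spines} switches, {cables} inter-switch cables")
# 24-port switches: 6 leaf + 3 spine = 9 switches, 72 inter-switch cables
# 48-port switches: 3 leaf + 2 spine = 5 switches, 72 inter-switch cables
```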
So hopefully that gives you a few ideas for building your next x86 cluster. With a lot of vendors, the ideas would stop after the hardware. On the software front, products like Oracle WebLogic 11g Application Server and MySQL Enterprise need no introduction, and they require no modification to run on 10GbE clusters. But let's say you are upgrading an older 2-socket, dual-core x86 server to a new 2-socket, six-core Sun Fire X4170 M2 Server. Do you really need to upgrade to a 10GbE network, or will your application run just fine on your existing 1GbE network? For starters, everything else being equal, if your old server ran a single application, then your new server, with 3x as many cores (twelve instead of four) and sufficient memory and I/O, should be able to run at least three applications using Oracle VM virtualization software. Of course, the benefits of Oracle VM include not only server consolidation but also more flexible management. Even if your core applications run fine on 1GbE, you could see significant benefits from 10GbE when you need to move VMs off a server for planned maintenance or load balancing, or to recover from unplanned server failures (using Oracle VM HA functionality).
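As a rough illustration of why the network matters for migration, here is a simple estimate of my own: live migration has to copy a VM's memory image over the wire, so the floor on migration time is memory size divided by link bandwidth. These numbers are idealized (no protocol overhead, dirty-page re-copies, or link contention), so real migrations take longer:

```python
# Idealized lower bound on live-migration time: a VM's memory image
# must cross the wire, so time scales with memory size / link speed.

def migration_seconds(vm_memory_gib: float, link_gbps: float) -> float:
    bits = vm_memory_gib * 1024**3 * 8      # memory image in bits
    return bits / (link_gbps * 1e9)         # ideal wire-transfer time

for gib in (8, 32, 64):
    t1, t10 = migration_seconds(gib, 1), migration_seconds(gib, 10)
    print(f"{gib:3d} GiB VM: ~{t1:4.0f} s at 1GbE, ~{t10:3.0f} s at 10GbE")
#   8 GiB VM: ~  69 s at 1GbE, ~  7 s at 10GbE
#  32 GiB VM: ~ 275 s at 1GbE, ~ 27 s at 10GbE
#  64 GiB VM: ~ 550 s at 1GbE, ~ 55 s at 10GbE
```

That order-of-magnitude difference is the gap between an evacuation you schedule around and one you can do on demand.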
Unlike a BMW, which is perhaps best enjoyed by itself on a deserted mountain road, Oracle's new x86 servers are designed to be used together in clusters, along with our high-performance 10GbE and InfiniBand switches, Oracle storage, and Oracle software. Engineered together, from application to disk.
Software. Hardware. Complete.

Read more about the Highlights of Oracle's Next Generation x86 Systems Launch...