Congratulations to IBM for the new lead in TPC-C. IBM once again has the lead in a benchmark that became meaningless a few years ago. Just before you ask: I wrote the same when Oracle was first. But somehow it looks like IBM wasn't able to just call it a day. So they did a TPC-C benchmark run again, and in the end they were able to yield 10,366,254 tpmC.
The result is indeed impressive. However, there are some key differences. The response times are vastly longer in the IBM result.
(in seconds)             90th %   Average   Maximum
New-Order                  2.10     1.137    24.041
Payment                    2.10     1.138    21.293
Order-Status               2.06     1.095    20.169
Delivery (interactive)     1.64     0.749    17.953
Delivery (deferred)        0.95     0.420     2.480
Stock-Level                2.08     1.113    21.547
Menu                       1.64     0.770    23.037
Now look at the response times at the Oracle result.
(in seconds)             90th %   Average   Maximum
New-Order                 0.170     0.168     5.885
Payment                   0.160     0.156     5.758
Order-Status              0.150     0.150     5.433
Delivery (interactive)    0.120     0.134     3.869
Delivery (deferred)       0.040     0.021     2.839
Stock-Level               0.210     0.182     4.796
Menu                      0.120     0.136     4.474
With similar response times, the number of transactions is more in the range of 8.9 million, as shown by the diagrams in the full disclosure reports.
This is the diagram from the IBM result:
Somewhere between 80% and 100% of the final result, the response time explodes.
This is the matching diagram from the Oracle TPC-C result:
There is no equivalent "explosion" in this result....
Furthermore, I want to point out certain aspects of the configuration as stated in the full disclosure report.
- This configuration has 48 SAS adapters with 380 MB of battery-backed write cache each, thus 17.8 GiB of battery-protected write cache in total.
- The configuration used 3*224 SSDs, summing up to 672 SSDs. I would suspect that they use SSDs with the SandForce 1500 controller (the same 177 GB drives they used in the TPC-C result that is documented in several places to have been generated with SandForce-based SSDs). This controller has an interesting capability: it compresses data, and some benchmarks have suggested that the performance of these drives differs considerably between compressible and less compressible data. It would be an interesting point to research how compressible TPC-C data is.
- The SSDs use MLC flash. The SandForce controller has a special mode of operation that enables the use of MLC with better durability, but this mode is based on compression, too. Interesting questions are: are the SSDs really capable of sustaining three years of TPC-C load given the use of MLC, and what would be the impact of less compressible data on durability?
- The database is completely on the SSDs. Just the database log is on disk. That's similar to the Oracle configuration.
- The configuration of the database on the system is ... well ... interesting. They use a partitioned database, and all these partitions are bound to certain resource sets. So essentially they split the system into several ones; to be exact, into 32 partitions per system, each bound to its own resource set. This setup ensures that all requests are CPU-local, factoring out the interconnect.
- The configuration doesn't provide any availability protection, but that's okay, because TPC-C doesn't mandate it. Just keep this in mind with the pricing and the configuration: there are no mechanisms ensuring availability, and you can't transform it into a highly available configuration, because the storage is directly attached to a single node (the storage is in the I/O drawers). The Oracle configuration is highly available by accident, as it uses shared storage and RAC.
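The compressibility question raised above can be probed with a quick sketch. The row layout below is a rough imitation of fixed-width TPC-C-style customer fields; it is hypothetical sample data, not the benchmark's actual schema or payload, and zlib merely stands in for whatever compression the SandForce controller implements internally:

```python
import random
import string
import zlib

# Generate rows imitating fixed-width, padded TPC-C-like fields
# (hypothetical layout for illustration only).
random.seed(42)

def fake_row():
    name = "CUSTOMER#{:06d}".format(random.randrange(1_000_000)).ljust(32)
    street = "".join(random.choices(string.ascii_uppercase, k=20)).ljust(40)
    balance = "{:012.2f}".format(random.uniform(-500.0, 50_000.0))
    return (name + street + balance).encode("ascii")

payload = b"".join(fake_row() for _ in range(10_000))
compressed = zlib.compress(payload, level=6)
ratio = len(compressed) / len(payload)
print(f"raw {len(payload)} B -> compressed {len(compressed)} B (ratio {ratio:.2f})")
```

The heavily padded fixed-width fields compress well in this toy example; real TPC-C rows with different fill patterns could behave quite differently, which is exactly why the question matters for a compressing controller.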
I think TPC-C is now really a corporate ego thing. We are in an arms race here. I'm really interested in how Oracle strikes back.