In my blog entry on RHEL 6.4 I mentioned that there have been performance enhancements for zlib compression. However, I never got around to actually measuring this until today.
I took the zlib test program minigzip.c from the Red Hat zlib 1.2.3 sources and linked it dynamically against libz. Then I created a 2 GB data file by tarring up /usr/share of the Red Hat file system three times, so it contains a good amount of compressible text. Finally I ran five rounds of compression on this file on both RHEL 6.3 and RHEL 6.4, once with default compression and once with "-9" maximum compression.
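For illustration, here is a minimal sketch of this kind of compression run using zlib's gzopen() interface (the file names are made up; the actual measurement used minigzip). The mode string "wb9" selects maximum compression, plain "wb" the default level:

    #include <stdio.h>
    #include <stdlib.h>
    #include <zlib.h>

    int main(void) {
        FILE *in = fopen("testdata.tar", "rb");         /* the big input file */
        gzFile out = gzopen("testdata.tar.gz", "wb9");  /* "wb9" = -9, "wb" = default level */
        char buf[65536];
        size_t n;

        if (in == NULL || out == NULL) {
            fprintf(stderr, "cannot open input or output file\n");
            return EXIT_FAILURE;
        }
        while ((n = fread(buf, 1, sizeof buf, in)) > 0)
            gzwrite(out, buf, (unsigned)n);

        gzclose(out);  /* flushes remaining data and writes the gzip trailer */
        fclose(in);
        return EXIT_SUCCESS;
    }

Compiled with something like gcc -o gztest gztest.c -lz and run under time, this exercises the distribution's libz just as minigzip does.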
The result for the Red Hat update is a +13% throughput increase for maximum compression and still a +8% throughput increase for normal compression. Your mileage may vary, of course.
The same test comparing SLES 11.2 with the new SLES 11.3 (which upgrades to zlib 1.2.7) shows a +25% throughput increase for maximum compression and still a +13% throughput increase for normal compression.
This is a relative comparison: the numbers are higher on SLES because SLES 11.2 is significantly slower in this test than RHEL 6.3. The latest releases (6.4 and 11.3) again show about the same throughput as each other.
Everyone using applications that dynamically link against zlib gets this improvement automatically. For applications that either ship their own version of zlib or link against it statically, the vendor needs to pick up the patch and include it in the next release.
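A quick way to check which zlib an application actually ends up with at run time is zlib's own version query, sketched here:

    #include <stdio.h>
    #include <zlib.h>

    int main(void) {
        /* ZLIB_VERSION is fixed at compile time from zlib.h;
           zlibVersion() reports the libz the dynamic linker loaded.
           A dynamically linked application therefore picks up a
           distribution update without being rebuilt. */
        printf("compiled against zlib %s, running with zlib %s\n",
               ZLIB_VERSION, zlibVersion());
        return 0;
    }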
June 21, 2013
June 18, 2013
Porting to Linux on System z
The recently released overview paper "Porting applications to Linux on IBM System z" prompted a few questions about porting in principle, which I'll try to answer in this blog entry.
- The developerWorks article "Porting applications to Linux for System z" provides more technical details, especially on the differences between 31-bit and 64-bit and of course the endianness problem (see the small sketch after this list). So before you start porting your C/C++ program, take a look at this. The migration toolkit can be found here and here.
- Open Source packages not yet integrated into a distribution are usually pretty easy. Most of the time a simple ./configure, make, make install sequence does the job. You probably need to install a few development libraries beforehand. Sometimes small changes to the Makefile are required so that s390x is recognized as a big-endian 64-bit architecture; usually you can take a look at the ppc64 implementation (the sketch after this list shows the compile-time check).
- Also for Open Source projects there is the Community Development System for Linux on System z (CDSL). Register there to get access to a System z.
- Java workloads usually run just out of the box.
- Don't forget to download and install the latest service pack, though.
- Due to 31-bit addressing, the heap of the 31-bit JVM on System z can't be as large as the heap of a 32-bit JVM. So for e.g. a 3 GB heap you need to switch to the 64-bit JVM on zLinux. If you then use the -Xcompressedrefs option, you can keep the additional memory consumption reasonable.
- For ISVs, IBM PartnerWorld offers a special roadmap called "Porting your UNIX or Linux on x86 solution to Linux for IBM System z mainframe server platforms".
- Also hosted on PartnerWorld is the "IBM Systems Application Advantage for Linux (Chiphopper)", an IBM offering that helps ISVs port their applications.
- IBM also runs Porting Centers around the world for professional help. Also available is the IBM Migration Factory.
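To make the endianness point above concrete, here is a minimal C sketch (my own illustration, not from the paper; the values are made up). It shows why code that writes raw memory is not portable between x86 and s390x, and the compile-time check build scripts can use to treat s390x like other big-endian 64-bit targets such as ppc64:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t value = 0x11223344;
        const unsigned char *bytes = (const unsigned char *)&value;

        /* Little-endian x86 prints "44 33 22 11", big-endian s390x
           prints "11 22 33 44" -- code that writes raw memory to disk
           or the network must therefore convert byte order explicitly,
           e.g. with htonl()/ntohl(). */
        printf("in-memory byte order: %02x %02x %02x %02x\n",
               bytes[0], bytes[1], bytes[2], bytes[3]);

    #if defined(__s390x__)
        /* gcc predefines __s390x__ on 64-bit System z; configure
           scripts and Makefiles can key off this just as they do
           for other big-endian 64-bit targets like ppc64. */
        puts("built for s390x: big-endian, 64-bit");
    #endif
        return 0;
    }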
June 3, 2013
New white paper: HyperPAV setup with z/VM and Red Hat Linux on zSeries
Parallel Access Volumes (PAV) allow you to have more than one I/O outstanding per volume on System z. However, PAV is not that easy to set up and maintain, which is why there is HyperPAV: it is quite easy to install and maintain, and it is supported by all in-service Linux distributions now.
Update 05/30/2015: The white paper has been removed from the IBM site, so the link no longer works.
The new white paper / howto "HyperPAV setup with z/VM and Red Hat Linux on zSeries" describes the step-by-step setup of HyperPAV for z/VM and zLinux. So if you are using ECKD disks and have any I/O performance problems, make sure you've implemented this.
Since the white paper has been removed, here are a few pointers to get you started:
The presentation "z/VM PAV and HyperPAV Support" and the z/VM HyperPAV web page give a good overview from the z/VM side, and the presentation "HyperPAV and Large Volume Support for Linux on System z" covers the Linux part (which basically works out of the box). And there is of course the Virtualization Workbook, which covers HyperPAV as well.