June 21, 2013

zlib performance improvements

In my blog entry on RHEL 6.4 I mentioned that there have been performance enhancements for zlib compression. However, I never got around to actually measuring this until today.

I've taken the zlib test program minigzip.c from the Red Hat zlib 1.2.3 sources and linked it dynamically against libz. Then I created a 2 GB data file by tarring up /usr/share of the Red Hat file system three times, so it contains quite a bit of compressible text. Finally I ran five rounds of compression for this file on both RHEL 6.3 and RHEL 6.4, once with default compression and once with "-9" maximum compression.
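If you want to reproduce something similar without digging out minigzip.c, a throughput test can be sketched in a few lines of C against the zlib streaming API. This is only an illustration, not the program I used: the file name and compression level come from the command line, the compressed output is simply thrown away, and the timing with clock() is a rough choice that is good enough for a single-threaded compressor.

/* zbench.c - rough zlib throughput sketch (illustrative, not the original test).
 * Build with: gcc -O2 zbench.c -o zbench -lz
 * Usage:      ./zbench <file> [level]       e.g. ./zbench big.tar 9   */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <zlib.h>

#define CHUNK (1 << 20)                 /* 1 MiB I/O buffers */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file> [level]\n", argv[0]);
        return 1;
    }
    int level = (argc > 2) ? atoi(argv[2]) : Z_DEFAULT_COMPRESSION;

    FILE *in = fopen(argv[1], "rb");
    if (!in) { perror("fopen"); return 1; }

    static unsigned char inbuf[CHUNK], outbuf[CHUNK];
    z_stream strm;
    memset(&strm, 0, sizeof(strm));     /* zalloc/zfree/opaque = Z_NULL */
    if (deflateInit(&strm, level) != Z_OK) {
        fprintf(stderr, "deflateInit failed\n");
        return 1;
    }

    unsigned long long total_in = 0, total_out = 0;
    clock_t start = clock();
    int flush;
    do {                                /* standard zlib streaming loop */
        strm.avail_in = fread(inbuf, 1, CHUNK, in);
        total_in += strm.avail_in;
        flush = feof(in) ? Z_FINISH : Z_NO_FLUSH;
        strm.next_in = inbuf;
        do {
            strm.avail_out = CHUNK;
            strm.next_out = outbuf;
            deflate(&strm, flush);      /* compressed data is discarded */
            total_out += CHUNK - strm.avail_out;
        } while (strm.avail_out == 0);
    } while (flush != Z_FINISH);
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    deflateEnd(&strm);
    fclose(in);
    printf("level %d: %.1f MB in %.1f s -> %.1f MB/s (compressed/original %.2f)\n",
           level, total_in / 1e6, secs, total_in / 1e6 / secs,
           (double)total_out / total_in);
    return 0;
}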

The result for the Red Hat update is a +13% throughput increase for maximum compression and still a +8% throughput increase for normal compression. Your mileage may vary, of course.

The same test comparing SLES 11.2 with the new SLES 11.3 (which has an upgrade to zlib 1.2.7) shows a +25% throughput increase for maximum compression and still a +13% throughput increase for normal compression.
This is a relative comparison: the numbers are higher on SLES because SLES 11.2 is significantly slower in this test than RHEL 6.3. The latest releases (6.4 and 11.3) again show about the same throughput.

Everyone using applications that dynamically link against zlib gets this improvement automatically. For applications that either ship their own version of zlib or link against it statically, the vendor needs to pick up the patch and include it in the next release to get this improvement.
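A quick way to see which zlib code a binary actually executes is to compare the compile-time version with what the library reports at run time; both ZLIB_VERSION and zlibVersion() are part of the standard zlib API. A tiny illustrative sketch (not from the original post):

/* zver.c - print zlib compile-time vs. run-time version.
 * Build with: gcc zver.c -o zver -lz
 * With dynamic linking, zlibVersion() reports whatever libz the loader
 * picked up; with a static or bundled zlib it reports the embedded copy. */
#include <stdio.h>
#include <zlib.h>

int main(void)
{
    printf("compiled against zlib %s\n", ZLIB_VERSION);   /* from zlib.h  */
    printf("running with zlib    %s\n", zlibVersion());   /* from library */
    return 0;
}

Note that on RHEL the improvement came in as a patch to the existing 1.2.3 package, so the version string alone won't show the difference there; on SLES 11.3 you will see the jump to 1.2.7. Either way the check tells you whether an application uses the distribution's libz at all.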

June 18, 2013

Porting to Linux on System z

The recently released overview paper "Porting applications to Linux on IBM System z" prompted a few questions about porting in principle, which I'll try to answer here in this blog entry.
(updated 7/15/2014)

June 3, 2013

New white paper: HyperPAV setup with z/VM and Red Hat Linux on zSeries

Parallel Access Volumes (PAV) allow you to have more than one outstanding I/O per volume on System z. However, PAV is not so easy to set up and maintain, which is why there is HyperPAV, which is quite easy to install and maintain. And it's supported by all in-service Linux distributions now.

The white paper is gone from the IBM site, so the link no longer works.
The new white paper / howto "HyperPAV setup with z/VM and Red Hat Linux on zSeries" describes the step-by-step setup of HyperPAV for z/VM and zLinux. So if you are using ECKD disks and have any I/O performance problems, make sure you've implemented this.

As this white paper has been removed, here are a few pointers to get you started:

The presentation "z/VM PAV and HyperPAV Support" and the z/VM HyperPAV web page give a good overview from the z/VM side, and the presentation "HyperPAV and Large Volume Support for Linux on System z" shows the Linux part (which basically works out of the box). And there is of course the Virtualization Workbook, which covers HyperPAV as well.

(updated 05/30/2015)