Archive for June, 2007

Performance comparison: Apache2 on Nevada65 with and without (kernel-based) Network Cache Accelerator

Wednesday, June 27th, 2007

Apache 2.2.3 is running on Solaris Nevada build 65 on x86 (VMware-emulated), with 0.5 GB of RAM and ~660 MB of data in the Apache document root.

This web data is divided into 10000 1-kbyte files and ~650 1-Mbyte files. Logging during the test was disabled for NCA and also for Apache (CustomLog commented out).

Load is generated by http_load, running on the (same) machine hosting the VM – Linux-based. The url file is built by this script:

#!/bin/bash
# (Loop setup, increments and the closing "done" restored; the URL
#  prefix in the echo lines was lost in the original post.)
rm -f url
c=0; yes=0
while [ $c -lt 10000 ]; do
    echo $c >> url
    yes=`expr \( $yes + 1 \) % 10`
    if [[ $yes == 0 ]]; then
        x=`expr $RANDOM % 649`
        echo $x >> url
    fi
    c=`expr $c + 1`
done

Triple run of http_load on Apache2 without NCA:

vnull@xeno:~/inz/instalki/http_load-12mar2006$ !./http
./http_load -parallel 5 -seconds 10 url
947 fetches, 5 max parallel, 9.52494e+07 bytes, in 10 seconds
100580 mean bytes/connection
94.7 fetches/sec, 9.52494e+06 bytes/sec
msecs/connect: 0.811796 mean, 22 max, 0.248 min
msecs/first-response: 2.89326 mean, 142.79 max, 0.562 min
HTTP response codes:
code 200 — 947

vnull@xeno:~/inz/instalki/http_load-12mar2006$ ./http_load -parallel 5 -seconds 10 url
1039 fetches, 5 max parallel, 9.84863e+07 bytes, in 10 seconds
94789.5 mean bytes/connection
103.9 fetches/sec, 9.84862e+06 bytes/sec
msecs/connect: 0.706046 mean, 23.07 max, 0.247 min
msecs/first-response: 3.09328 mean, 258.584 max, 0.564 min
HTTP response codes:
code 200 — 1039

vnull@xeno:~/inz/instalki/http_load-12mar2006$ ./http_load -parallel 5 -seconds 10 url
959 fetches, 5 max parallel, 1.01547e+08 bytes, in 10 seconds
105888 mean bytes/connection
95.9 fetches/sec, 1.01547e+07 bytes/sec
msecs/connect: 0.824296 mean, 42.473 max, 0.248 min
msecs/first-response: 2.81876 mean, 231.154 max, 0.528 min
HTTP response codes:
code 200 — 959

Triple run of http_load against Apache2 WITH NCA:

vnull@xeno:~/inz/instalki/http_load-12mar2006$ ./http_load -parallel 5 -seconds 10 url
1353 fetches, 5 max parallel, 1.41757e+08 bytes, in 10.0018 seconds
104773 mean bytes/connection
135.276 fetches/sec, 1.41732e+07 bytes/sec
msecs/connect: 4.6854 mean, 77.009 max, 0.399 min
msecs/first-response: 8.05471 mean, 77.009 max, 1.403 min
HTTP response codes:
code 200 — 1353

vnull@xeno:~/inz/instalki/http_load-12mar2006$ ./http_load -parallel 5 -seconds 10 url
1494 fetches, 5 max parallel, 1.48097e+08 bytes, in 10.0006 seconds
99127.9 mean bytes/connection
149.391 fetches/sec, 1.48088e+07 bytes/sec
msecs/connect: 4.26223 mean, 57.807 max, 0.398 min
msecs/first-response: 7.23063 mean, 57.807 max, 0.583 min
HTTP response codes:
code 200 — 1494

vnull@xeno:~/inz/instalki/http_load-12mar2006$ ./http_load -parallel 5 -seconds 10 url
1568 fetches, 5 max parallel, 1.51315e+08 bytes, in 10.0029 seconds
96502.2 mean bytes/connection
156.755 fetches/sec, 1.51272e+07 bytes/sec
msecs/connect: 3.97283 mean, 207.755 max, 0.398 min
msecs/first-response: 6.71551 mean, 207.755 max, 0.586 min
HTTP response codes:
code 200 — 1568

NCA: (157+149+135)/3 = ~147 r/s
without NCA: (95+104+96)/3 = ~98 r/s

NCA gave a ~50% boost for free in this test scenario!
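For the record, the same arithmetic done in one awk call (numbers are the fetches/sec figures from the six runs above):

```shell
# Average fetches/sec from the three runs in each configuration,
# then the relative boost NCA gives.
awk 'BEGIN {
    plain = (94.7 + 103.9 + 95.9) / 3
    nca   = (135.276 + 149.391 + 156.755) / 3
    printf "%.0f r/s vs %.0f r/s: +%.0f%%\n", plain, nca, (nca / plain - 1) * 100
}'
# prints: 98 r/s vs 147 r/s: +50%
```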

Note that /dev/nca’s nca_max_cache_size was set (via ndd) to 2048, so that only files up to 2 kB are cached.
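For reference, ndd reads a parameter when called without -set; a sketch of the tuning (run as root, parameter name as given above):

```shell
# Cache only objects up to 2 kB (2048 bytes):
ndd -set /dev/nca nca_max_cache_size 2048
# Read the current value back:
ndd /dev/nca nca_max_cache_size
```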

Solaris Network Cache & Accelerator – does it work or not?

Wednesday, June 27th, 2007

Determining whether Network Cache Accelerator works on Solaris…

First, ensure that you have enabled everything you need in /etc/nca/*.

Check that Apache is NOT listening on IPv6! A simple test:
-bash-3.00# grep ^Listen /etc/apache2/httpd.conf

should be enough.

Next, place the following into /usr/apache2/bin/apachectl, just before the end of the configuration section:

# Enable NCA:
if [ -f $NCAKMODCONF ]; then
    if [ "x$status" = "xenabled" ]; then
        HTTPD="env LD_PRELOAD=/usr/lib/ $HTTPD"
    fi
fi

Reboot – yay, this shouldn’t be needed on a UNIX box ;) Afterwards, the ncad library should be preloaded via LD_PRELOAD:
-bash-3.00# pldd `pgrep http`|grep ncad

You can also check with pargs -e whether LD_PRELOAD is set properly.

The hardcore way of determining whether NCA works:
-bash-3.00# truss -ff -t accept,listen,bind /usr/apache2/bin/apachectl start
703: bind(256, 0x08047C90, 16, SOV_SOCKBSD) = 0
703: listen(256, 8192, SOV_DEFAULT) = 0
707: accept(256, 0x081A5268, 0x081A5254, SOV_DEFAULT) (sleeping...)
709: accept(256, 0x081A5268, 0x081A5254, SOV_DEFAULT) (sleeping...)
711: accept(256, 0x081A5268, 0x081A5254, SOV_DEFAULT) (sleeping...)
713: accept(256, 0x081A5268, 0x081A5254, SOV_DEFAULT) (sleeping...)
715: accept(256, 0x081A5268, 0x081A5254, SOV_DEFAULT) (sleeping...)
<run e.g. GET http://<ip>/123.html>
713: accept(256, 0x081A5268, 0x081A5254, SOV_DEFAULT) = 11
713: accept(256, 0x081A5268, 0x081A5254, SOV_DEFAULT) (sleeping...)
<run a second time – no output should appear from accept(), as the request is served by the kernel!>

pfiles `pgrep http`|grep AF will show you that the listening socket is of type AF_INET, not AF_NCA – which is odd!

It seems that on Solaris Nevada truss -v is broken (it doesn’t display parameters in detail?). On Solaris 10 it works.

Also, it seems that sotruss -f -T output differs between Solaris 10 and Nevada – smells like a second bug? It seems it doesn’t show where the calls to bind() come from.

Securing OpenLDAP – userPassword issue

Tuesday, June 26th, 2007

An unsecured OpenLDAP (slapd) server…

Output from Solaris 10 box:

-bash-3.00# ldaplist -l passwd test5
dn: uid=test5,ou=People,dc=lab1
uid: test5
cn: Johnny Doe
homeDirectory: /export/home/test5
userPassword: {MD5}DMF1ucDxtqgxw5niaXcmYQ==
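For context: that value is the RFC 2307 {MD5} scheme – base64 of the raw, unsalted MD5 digest – which is trivially brute-forced offline. A quick sketch of how such a value is produced (using openssl; the password “secret” is just an example):

```shell
# {MD5} userPassword scheme: "{MD5}" + base64(md5(password)), no salt
printf '{MD5}%s\n' "$(printf '%s' 'secret' | openssl dgst -md5 -binary | openssl base64)"
# prints: {MD5}Xr4ilOzQ4PCOq3aQ0qbuaQ==
```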

After adding the following snippet to OpenLDAP’s slapd.conf we prevent anyone from viewing user passwords (including the Solaris LDAP proxy bind; excluding the logging-in user themselves and slapd’s admin/Manager):

access to attrs=userPassword,shadowLastChange
        by dn="cn=admin,dc=lab1" write
        by anonymous auth
        by self write
        by * none

-bash-3.00# ldaplist -l passwd test5
dn: uid=test5,ou=People,dc=lab1
uid: test5
cn: Johnny Doe
gecos: Johnny Doe,none,0,1,Johnny Doe
homeDirectory: /export/home/test5

AIX – first battle

Tuesday, June 19th, 2007

At the UnixDays conference I talked with many people, and they got me fascinated with AIX – so I bought an RS/6000 machine ;) But as everyone knows, a single system is boring. Maybe next year I’ll build a home HACMP laboratory… :]

Yesterday my first RS/6000 server (7046-B50, CHRP, PPC 375, 1 MB L2 cache, rack-mount) appeared in my house. Then it started (keep in mind that this was my first look at this OS):

0) It seems that the right serial-console parameters for SMS are 9600 8N1.

1) AIX was not installed and the firmware was outdated; changed the default boot order in SMS.

2) Trying to install AIX 5.3 (with the outdated firmware) – yeah, I knew this wouldn’t work, but I wanted to check what happens ;)

3) Step 2 failed ;)

4) Updating the firmware to the latest one, resetting SMS to defaults, having lots of fun with OpenFirmware (getting familiar with this damn PCI logical tree) — and finally I’ve learned that a space makes a big difference to O/F, e.g.

0> " scsi" select-dev ok

1> show-children ok

5) installing AIX again

6) Ouch, some packages are missing from the CD; Google says this is normal behaviour… hm (strictly speaking, the missing “devices.pci.ethernet” gives me a bad feeling…)

7) The install halted at the missing-packages stage? Power off, power on, back to step 5.

8) Left the install running at the “missing packages” stage.

9) Finally AIX boots from the SCSI drive! Yeah! But why the heck is install_assistant not running? And why can’t I log in via the console?

10) … time passes … while having fun with rescue mode from the CD and rootvg imported from the internal SCSI id=2 drive ;) I love reading man pages… especially without less(1).

11) Google says that I should be using a real null-modem cable for serial login (full handshake: DTR, DSR & CD swapped). Hm, a Cisco null-modem cable doesn’t work either. Damn, damn IBM!

12) back to 10

13) As I said – go back to 10 :)

14) TERM=vt100 smitty chdev seems not to remember the RUNTIME & LOGIN console parameters, even without making them instant (just saving them to the ODM). Maybe it is altering the CD’s ODM instead?

15) go back to 10

16) It seems that the solution is to put a little script (I’ve called it / – remember to chmod +x it!!) that runs chdev with the persistent (-P) option for tty0 (adding clocal to the RUNTIME & LOGIN tty0 settings; I’ve also put in a chdev line to enable login on this damn tty0 console). I’ve also set a root password, to be safe (don’t know if this is necessary).
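The script body would be something along these lines (a sketch – the exact runmodes list comes from `lsattr -El tty0` on your box; only the added clocal and the login line matter):

```shell
# Persist (-P) the changes in the ODM instead of only the running device.
# Add clocal to the port modes so the tty ignores carrier-detect...
chdev -l tty0 -P -a runmodes='hupcl,cread,brkint,icrnl,opost,onlcr,isig,icanon,echo,echoe,echok,clocal'
# ...and enable the login process on the console tty:
chdev -l tty0 -P -a login=enable
```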

17) sync; sync; sync; reboot (this time testing the real SCSI install)


19) # install_assistant – a piece of cake, even if this is my first install…

20) Everything would be great if my 100 Mbps IBM PCI Ethernet card (the internal one; this box has two of them) weren’t dropping 30–60% of packets. It seems to have auto-negotiated 100 Mbps FDX (info from the switch, smitty, lsattr, …).

21) Having fun: altering the TRANSMIT/RECEIVE HW queues on ent0… nothing happens (same drops with ping -i 0.1 from a Linux box).

22) Temporary solution: forcing ent0 to become 10 Mbps HD.
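Presumably done with something like the following (the media_speed attribute and its value strings vary by adapter driver – check `lsattr -Rl ent0 -a media_speed` first):

```shell
# Force 10 Mbps half-duplex; -P persists the change in the ODM
# (takes effect after reboot or after re-configuring the adapter):
chdev -l ent0 -P -a media_speed=10_Half_Duplex
```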

23) Ha! Now I can telnet in without problems (no “connection reset by peer”-type errors).
Screenshot from AIX (after telnetting in)

24) Moved this loud box from in front of me to its own place: TO DEXTER’S LABORATORY ( => basement ). Now, much better… I hate the sound of SCSI drives :/

Playing with long distance NFS replication/Disaster Recovery protection using Sun StorageTek Availability Suite

Thursday, June 7th, 2007

“Sun StorageTek Availability Suite, or AVS for short, is an OpenSolaris Community project that provides two filter drivers; Remote Mirror Copy & Point in Time Copy, a filter-driver framework, and an extensive collection of supporting software and utilities.

The Remote Mirror Copy and Point in Time Copy software allows volumes and/or their snapshots, to be replicated between physically separated servers in real time, or by point-in-time, over virtually unlimited distances. Replicated volumes can be used for tape and disk backup, off-host data processing, disaster recovery solutions, content distribution, and numerous other volume based processing tasks.”

Today I configured the following scenario:

DR for NFS servers

Both avs1 and avs2 nodes are running OpenSolaris Nevada build 65. It works great! Each of the nodes runs a ZFS mirror on two disks. The AVS bitmaps should also be RAID-protected (for example using SVM). After the DR switch:

AVS was a commercial product, but Sun decided to release it for free as an Open Source project – so enjoy!
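For the curious, a remote-mirror set like the one above is enabled and synced with sndradm; the host names match the post, but the device paths below are hypothetical:

```shell
# Enable the replication set (primary avs1 -> secondary avs2, async):
sndradm -n -e avs1 /dev/rdsk/c0t1d0s0 /dev/rdsk/c0t1d0s1 \
           avs2 /dev/rdsk/c0t1d0s0 /dev/rdsk/c0t1d0s1 ip async
# Kick off the initial full synchronisation:
sndradm -n -m avs2:/dev/rdsk/c0t1d0s0
# Watch the sync progress:
dsstat -m sndr
```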

Everyone likes screenshots – here you have one from the initial sync:


links for 05/06/07

Tuesday, June 5th, 2007

J2EE performance tips

Some J2EE Performance Tips

Sun StorageTek Availability Suite 4.0 Software Installation and Configuration Guide

Sun StorageTek Availability Suite 4.0 Remote Mirror Software Administration Guide

Sun Cluster and Sun StorageTek Availability Suite 4.0 Software Integration Guide

… evolution doesn’t take prisoners …