I'm not willing to drop the cache on the VMware cloud: it's too busy with paying clients to jeopardise anything.
I'm on thin ice with a big client just now due to sporadic "white page" issues. Grr.
Have dropped the caches on both Xen VMs (they are the ones closest in spec.), and below are the subsequent figures, taken immediately afterwards - eNlight being the second set:
Code:
             total       used       free     shared    buffers     cached
Mem:        524288     502076      22212          0        860      74452
-/+ buffers/cache:     426764      97524
Swap:       522104      77588     444516
Total:     1046392     579664     466728
--------------------//-----------------------
             total       used       free     shared    buffers     cached
Mem:        524288     468668      55620          0        316      21908
-/+ buffers/cache:     446444      77844
Swap:      2129912     307208    1822704
Total:     2654200     775876    1878324
Now we have a point of reference.
Now for the disc mappings, where the differences are interesting to note:
Code:
/dev/xvda3 on / type ext3 (rw,noatime,usrquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/xvda1 on /boot type ext2 (rw,noatime)
tmpfs on /dev/shm type tmpfs (rw,noexec,nosuid)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/usr/tmpDSK on /tmp type ext3 (rw,noexec,nosuid,loop=/dev/loop0)
/tmp on /var/tmp type none (rw,noexec,nosuid,bind)
----------------------//------------------------
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw,usrquota)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/xvda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw,noexec,nosuid)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/usr/tmpDSK on /tmp type ext3 (rw,noexec,nosuid,loop=/dev/loop0)
/tmp on /var/tmp type none (rw,noexec,nosuid,bind)
I haven't been told what the underlying disc structure is on 'Cloud #2', but it is apparent that eNlight uses LVM of some sort. Not logging access times (noatime) should, if nothing else, theoretically cut down on disc writes, though it has other implications. Having /boot as ext3 is a waste of time, IMO, though once the server is up and running it should have little bearing.
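For completeness, the snapshots above can be reproduced with something along these lines (a sketch only; dropping caches needs root, and `3` clears the page cache plus dentries and inodes):

```shell
# Flush dirty pages to disc first so the figures aren't skewed
sync

# Drop page cache, dentries and inodes (root only; skipped otherwise)
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches
fi

# Memory snapshot; -t adds the "Total:" line shown in the figures
free -t

# Current filesystem mappings
mount
```

If noatime turns out to matter on 'Cloud #2', it can be trialled without a reboot via `mount -o remount,noatime /` before committing it to /etc/fstab.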
[Will PM the eNlight login details (and 'Cloud #2' if wished) - both VMs are used for live sites, albeit not "crucial". By their nature they serve up different traffic at present and one is 'forced' to use FCGI (yuk!) until its software is migrated/updated.]
The saga continues...
EJ