Direct memory access (DMA)

Host Disk Performance

  • DMA disk access
    The best thing you can do to ensure good disk performance is to make certain Linux is using direct memory access (DMA) to access your disks.

DMA disk access is significantly faster than non-DMA modes, it imposes much less load on the CPU, and it holds up much better on a busy machine.

If you have SCSI disks, this is the only configuration you are likely to have, and you do not need to do anything.

If you have IDE disks, make certain that “Busmastering DMA” is enabled. As root, run hdparm. It provides information about disks in the following format:

# /sbin/hdparm /dev/hda
/dev/hda:
multcount    =  0 (off)
I/O support  =  0 (default 16-bit)
unmaskirq    =  0 (off)
using_dma    =  1 (on)
keepsettings =  0 (off)
nowerr       =  0 (off)
readonly     =  0 (off)
readahead    =  8 (on)
geometry     = 523/255/63, sectors = 8406720, start = 0

The using_dma line indicates whether or not DMA is enabled.
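
If you want to check just that field, you can filter the output for it. This pipeline is only a convenience, not a feature of hdparm itself:

# /sbin/hdparm /dev/hda | grep using_dma
 using_dma    =  1 (on)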

    • How to set up DMA
    • If you build your own kernel, you can enable the following two options (a configuration sketch follows this list):
        o Generic PCI bus-master DMA support: CONFIG_BLK_DEV_IDEDMA
        o Use DMA by default when available: CONFIG_IDEDMA_AUTO
    • You can use hdparm to enable DMA on a per-disk basis on a running system. For example, to enable DMA on the first IDE drive, become root (su), and run the following command:

# /sbin/hdparm -d1 /dev/hda

/dev/hda:
 setting using_dma to 1 (on)
 using_dma    =  1 (on)

If you use hdparm, you will need to run it every time you reboot your machine; a sketch of how to automate this follows this list.

    • If you pass your kernel the options ide0=dma or ide1=dma at boot time, the kernel will automatically try to use DMA on IDE channel 0 or 1 respectively.
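
If you take the kernel-build route, the two options named in the list above simply become lines in the kernel's .config file. A minimal sketch, using the 2.2.x-era option names quoted there:

# Relevant lines in the kernel .config after enabling both options:
CONFIG_BLK_DEV_IDEDMA=y
CONFIG_IDEDMA_AUTO=y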
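To make the hdparm setting survive reboots without rebuilding the kernel, here is a minimal sketch, assuming a distribution with an rc.local-style startup script (the exact path varies by distribution):

# Appended to /etc/rc.d/rc.local so the setting is reapplied at each boot:
/sbin/hdparm -d1 /dev/hda

Alternatively, assuming you boot with LILO, the kernel options from the last bullet can go in /etc/lilo.conf; run /sbin/lilo afterward to install the change:

append="ide0=dma ide1=dma"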
  • Checking performance
    You can also use hdparm to gather some raw performance numbers for your disk speed and your disk cache speed, using the following two options:

-t    perform device read timings
-T    perform cache read timings

The following listings show two sample runs, the first without DMA enabled and the second with DMA enabled.

# /sbin/hdparm -Tt /dev/hda
/dev/hda:
Timing buffer-cache reads:   128 MB in  1.23 seconds =104.07 MB/sec
Timing buffered disk reads:  64 MB in 14.40 seconds = 4.44 MB/sec
# /sbin/hdparm -Tt /dev/hda
/dev/hda:
Timing buffer-cache reads:   128 MB in  1.22 seconds =104.92 MB/sec
Timing buffered disk reads:  64 MB in  3.52 seconds =18.18 MB/sec

Host Display Performance

  • Accelerated X server
    For the very best graphics performance, the first thing to do is make certain your card has an accelerated X server. Often the first release of “support” for a given chip will include a caveat that “only unaccelerated support is present,” and that unaccelerated support is often terribly slow.

In addition, make sure you are running version 3.3.4 or later of XFree86. VMware has written and contributed an extension to the XFree86 DGA code that gives VMware Workstation accelerated performance in full-screen mode. All versions of XFree86 from 3.3.4 forward include the enhanced DGA support in the X servers for which DGA exists.

  • Color depth
    The higher your color depth, the more memory your screen requires, which often slows graphics operations. This is especially true with VMware Workstation, and we recommend you use a 16-bit color depth with your X server. The appearance is much better than 8-bit pseudo-color, and compared to 32-bit color, the 16-bit mode requires VMware Workstation to use only half as much memory for the display.
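
As a sketch, with XFree86 3.3.x the depth is selected in the Screen section of /etc/X11/XF86Config. The device, monitor, and mode entries here are placeholders; keep the ones from your existing configuration:

Section "Screen"
    Driver            "svga"             # placeholder; match your X server
    Device            "My Video Card"    # placeholder identifier
    Monitor           "My Monitor"       # placeholder identifier
    DefaultColorDepth 16                 # the recommended 16-bit depth
    Subsection "Display"
        Depth         16
        Modes         "1024x768"         # placeholder mode list
    EndSubsection
EndSection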

Host Memory Performance

  • Memory use
    The critical thing about memory is making sure Linux detects it all. Older kernels often detect only the first 64MB of RAM in a machine unless that number is explicitly overridden. You can determine how much memory Linux detects by running free.
% /usr/bin/free
             total       used       free     shared    buffers     cached
Mem:        127948     123880       4068      14016       2636      95016
-/+ buffers/cache:      26228     101720
Swap:       130748       6504     124244

In this example, the machine has 128MB of memory, all of which was detected by the kernel. If Linux does not detect all of the memory in your machine, become root (su) and edit /etc/lilo.conf to include the line

append="mem=128M"

Host Sound Performance

  • Sound configuration
    If the sound support in your kernel is built as modules, you can improve the efficiency of the modules’ operation.

It is unclear how much difference this makes in overall system performance, but at the very least it prevents error messages from your sound modules saying they are unable to allocate DMA memory.

If you have a 2.2.x kernel, you can configure the sound modules to allocate their play buffer once and keep it, instead of freeing and reallocating it each time the sound card is used. As root, edit /etc/conf.modules (or /etc/modules.conf, depending on your Linux distribution) and add the line

options sound.o dmabuf=1

Place this entry on the line immediately above the line that reads alias sound followed by the name of the driver for your sound card. You will need to unload and reload the sound module if you do this.
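
Putting the pieces together, here is a minimal sketch of the relevant /etc/conf.modules lines; es1371 is only an example driver name, so substitute the driver for your own card:

options sound.o dmabuf=1         # allocate the DMA play buffer once and keep it
alias sound es1371               # example driver; use your card's driver here

Then, as root, unload and reload the module so the option takes effect (depending on your setup, you may need to unload card-specific modules first):

# /sbin/rmmod sound
# /sbin/modprobe sound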
