This article is from a FAQ concerning SCO operating
systems. While some of the information may be applicable to any OS,
or any Unix or Linux OS, it may be specific to SCO Xenix or OpenServer.
(Contributed by Bela Lubkin):
Well, TA #109419 has a few of the details. Its supposed fix of
disabling HPPS is rather peculiar...(*)
The most typical cause of vhand spinning is running out of
DMAable memory. OpenServer's kernel memory allocator distinguishes
between "DMAable" memory (meaning ISA-DMAable, that is, memory
below 16MB) and non-DMAable. In most modern systems, the only
device that could possibly need ISA DMAable memory is a floppy
drive. Other rarer users would include: ISA sound cards, Adaptec
154x host adapters, and old ISA QIC02/QIC36 tape adapters.
Unfortunately, it's _really_ easy for kernel code to mistakenly
request DMAable memory. You have to explicitly request non-DMAable,
or else your request is understood to be for DMAable memory. There
is a lot of code in the kernel and in third party drivers which
mistakenly requests DMAable. This doesn't show up in testing
because it's a "no-consequence" bug. Using DMAable memory doesn't
hurt the driver at all, it's just a waste.
Except... as overall system memory gets larger over time, people
are doing more with systems. The same drivers that used to
mistakenly allocate 512K of DMAable may now allocate 1MB, or 2MB...
Pretty soon you're completely out of that tiny 16MB window!
Each release of OpenServer has corrected some amount of code
that mistakenly requests DMAable. There was a major push to fix
these problems in OSR507, and it is almost completely clean of such
mistaken requests. (Some are rather hard to root out because
multiple drivers use a single memory allocation service that
doesn't give a way to specify DMA requirements, and _one_ of those
drivers actually does need DMAable, so they all have to accept
DMAable memory.)
When DMAable memory is exhausted and someone requests more of
it, vhand starts spinning, looking for memory to use.
As an administrator, you don't have control over most uses of
DMAable memory. However, there is one large user that you _do_
control: the buffer cache. The kernel parameter PLOWBUFS controls
how many 1K disk buffers are allocated in DMAable address space.
You can see how many are currently being used by running `grep bufs
/usr/adm/messages` (this might produce a lot of output). On a 507
system that I checked, I get:
kernel: Hz = 100, i/o bufs = 12752k (high bufs = 11728k)
This system has about 13000 I/O buffers, 1024 of which are
DMAable (total minus "high"). You want almost all of your buffers
to be "high", which are fine for use by PCI host adapters and IDE
controllers.
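As a rough illustration (plain POSIX shell, fed the sample boot line
above rather than a live /usr/adm/messages), the DMAable count is just
the total minus "high":

```shell
# Sample boot line from this article; on a real system you would use
# the output of:  grep bufs /usr/adm/messages
msg='kernel: Hz = 100, i/o bufs = 12752k (high bufs = 11728k)'
total=$(echo "$msg" | sed 's/.*i\/o bufs = \([0-9]*\)k.*/\1/')
high=$(echo "$msg" | sed 's/.*(high bufs = \([0-9]*\)k.*/\1/')
# DMAable (low) buffers are the ones that are NOT "high"
echo "low (DMAable) bufs: $((total - high))k"
# prints: low (DMAable) bufs: 1024k
```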
PLOWBUFS sets what percentage of total buffers should be
allocated below 16MB. If you have 20000 total buffers, setting
PLOWBUFS to 1 (its lowest setting) gets you 200 DMAable buffers,
which is only 1/80 of the total 16MB space. If you have default
parameters for 505, you probably have 6652 total buffers and about
2000 DMAable buffers, so you could save 1.8MB right there. That
might be enough to _never_ hit the problem, or might only push the
problem horizon out from 48 days to a year or so...
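The arithmetic above is easy to check for yourself; here is a minimal
shell sketch using the same numbers as the paragraph (nothing
SCO-specific, just the percentage math):

```shell
NBUF=20000      # total 1K disk buffers (example from the text)
PLOWBUFS=1      # lowest percentage setting
# number of buffers placed in the DMAable region below 16MB
low=$((NBUF * PLOWBUFS / 100))
echo "$low DMAable buffers (${low}K of the 16384K low window)"
# prints: 200 DMAable buffers (200K of the 16384K low window)
```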
If you have non-default parameters then buffers might be
consuming much more of the low 16MB, and you could improve things
even more by lowering PLOWBUFS.
Starting with OpenServer 506, the PLOWBUFS parameter has a dual
meaning. Values <= 100 mean to allocate that percentage of total
buffers from DMAable memory. Values > 100 mean to allocate
exactly that many buffers. The machine I was looking at has
PLOWBUFS=1024, which is why it got exactly that many DMAable
buffers. That's the default setting in 507, chosen to allow floppy
drives and old ISA host adapters to work. In a system where the
only user is the floppy drive, it could probably be set to the
minimum, 101, without negative consequences. Setting it lower than
that wouldn't be useful -- if you're going to run out of 15.9MB
then you're going to run out of 16.0MB moments later.
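The dual interpretation can be sketched as a small shell function (my
paraphrase of the rule as stated above, not actual kernel code):

```shell
# <= 100: percentage of total buffers; > 100: exact buffer count
nlowbufs() {
  nbuf=$1
  plowbufs=$2
  if [ "$plowbufs" -le 100 ]; then
    echo $((nbuf * plowbufs / 100))
  else
    echo "$plowbufs"
  fi
}
nlowbufs 13000 1024   # 507-style default: exactly 1024 low buffers
nlowbufs 13000 1      # percentage form: 1% of 13000 = 130 low buffers
```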
Other things that you can change (including PLOWBUFS, so this is
a comprehensive remedy list):
- reduce PLOWBUFS to 101 (on 506 or later), or 1 (any release)
to sharply decrease the number of DMAable disk buffers
- edit /etc/conf/pack.d/str/space.c, change the value of
`str_pool_mem' from MEM_BUF to MEM_KVMAPPED (504 or 505: won't work
on 500 or 502; already changed on 506)
- remove NFS from the kernel if you are not using it -- it is a
sloppy waster of DMAable memory (fixed in 507)
- if you _are_ using NFS, edit /etc/conf/cf.d/mdevice, find all
"nfsd" entries, and make sure they have characteristics 'd' and
'P'. Be careful: the mdevice file cannot be reconstructed from
other files. If there is more than one "nfsd" line, they will look
different from each other; they're _supposed_ to look different.
Just add 'd' and 'P' to the third field of each entry, if they're
not already there.
- if you're on 504 or earlier and nothing else has worked,
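For the PLOWBUFS change itself, the usual OpenServer tuning workflow
looks roughly like this (a hedged sketch only; the configure menus vary
by release, so check the configure(ADM) man page on your system before
relinking):

```shell
cd /etc/conf/cf.d
./configure       # interactive; find PLOWBUFS under the disk/buffer category
./link_unix -y    # relink the kernel with the new value
# reboot so the new kernel (and its buffer layout) takes effect
```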
Note that PCI devices can DMA to any address in the system. The
word "DMAable" dates back to the ISA days. Reducing DMAable buffers
will not harm your modern I/O devices.
(*)Well, not all _that_ peculiar. HPPS was one of those drivers
which mistakenly requested DMAable memory. That was fixed starting
with 505, so disabling it would not help you if you are on that
release or later.
© 2013-07-18 Bela Lubkin