[ILUG] Bigger pipes...
kenn at linux.ie
Wed Apr 25 21:24:24 IST 2001
On Wed, Apr 25, 2001 at 07:34:13PM +0100, Padraig Brady wrote:
> Kenn Humborg wrote:
> > And, in fact, mbuffer does the job nicely. The 40secs is currently
> > reduced to 24secs. I use a dd to pull across NFS which feeds
> > a nice big mbuffer process, which feeds gzip.
> > Thanks, Padraig!
> > BTW, Kevin, I already adjusted rsize & wsize to 8K.
> > Later,
> > Kenn
> why use dd? Doesn't that introduce another data copy?
> couldn't you use mbuffer -i /images/test.img ?
Because I didn't spot that option. I'll try it. However,
I have found that the block size on the dd affects performance,
so I'll probably stick with it, just to make the NFS reads
easier to tune.
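For anyone following along, the whole pipeline looks roughly like
this (paths, server name and target device are made up for
illustration; bs= matches the 8K rsize, and the mbuffer size is
just whatever RAM I can spare):

  # NFS mount with the 8K rsize/wsize mentioned above
  mount -o rsize=8192,wsize=8192 server:/images /mnt/nfs

  # read in NFS-sized chunks, smooth out the bursts, decompress
  dd if=/mnt/nfs/test.img.gz bs=8k \
      | mbuffer -m 32M \
      | gzip -dc > /dev/hda1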
> Also completely guessing, mbuffer supports mmap which
> might get rid of the output data copy also?
I tried the mmap option and it crawled.
I'm not worried about memory-to-memory copies at all. I'm
getting 2MB/s over NFS (on a quiet 100Mb/s network - more on this
later). I can decompress at about 2MB/s (PPro 200MHz).
Best disk write throughput is about 6MB/s.
So, as long as I can keep everything happening in parallel, then
there's no point in tuning any more. Memory copying is a _very_ small
part of these timings.
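(To spell out the arithmetic: with the stages running in parallel,
the pipeline goes at roughly the speed of its slowest stage, i.e.
min(2, 2, 6) = 2MB/s, so shaving a memory copy off one of the
faster stages buys nothing until NFS or gzip gets quicker.)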
Regarding NFS throughput, I see it transfer a chunk of data (for
maybe 1.5 secs), then pause (for 0.5 to 1.0 secs), then fetch
more data. The server is not disk bound during these pauses
(and, besides, the whole file is in server RAM at this stage).
I don't know if it's client-side, server-side or in the NIC
drivers (Netgear FA311s, based on the newer NatSemi chipset,
not the Tulip, and using Netgear's drivers). This is on an
otherwise-quiet network segment.
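If anyone fancies a guess, my plan for poking at it (untested,
standard tools only) is roughly:

  nfsstat -c                  # client-side RPC stats; a climbing
                              # retrans count would mean lost packets
  tcpdump -i eth0 port 2049   # watch the wire during a pause to see
                              # which end goes quiet

If the client just stops issuing reads, it's client-side; if reads
go out and replies are slow coming back, it's the server or the
NIC drivers.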
Right now, I'm happy enough with this (bottleneck is gzip on the
CPU). But later, I'll be pulling different images across to multiple
machines simultaneously, so I'd like to eventually improve the
NFS throughput, since NFS will become the bottleneck again.