[ILUG] Bigger pipes...
padraig at antefacto.com
Wed Apr 25 16:48:04 IST 2001
Maybe you could buffer the data internally in dd, to allow
gzip to saturate the CPU? Something like:
dd if=/images/test.img ibs=20000 | gzip -dc > /dev/null
There are other tools on freshmeat that might do the job:
dog, buf, cpipe, piper, ... ?
Kenn Humborg wrote:
> I'm trying to read a compressed disk image from an
> NFS server and write it to a local disk:
> gunzip --to-stdout < /images/test.img > /dev/sda1
> Let's take the disk writes out of the equation:
> gunzip --to-stdout < /images/test.img > /dev/null
> This takes about 40 secs wall time and about 20 secs
> user CPU. Just pulling it across the network:
> cat /images/test.img > /dev/null
> takes about 20 secs.
> So, it looks like gunzip can't get its input quickly
> enough to saturate the CPU. The NFS reads and gzip's
> decompression are not happening in parallel.
> My thinking is that if I could put a pipe with a BIG
> buffer in between, I could do something like:
> cat /images/test.img | gunzip --to-stdout > /dev/null
> If this pipe was big enough, cat could keep pulling data
> across the network and stay ahead of gunzip's appetite,
> thus reducing the total time to something near 20 secs.
> But pipes only have a 4k buffer.
> Before I go and write a mega-pipe that uses select() or
> poll() to implement a large buffer between two processes,
> does anyone know of any existing tool to do this?
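Failing an existing tool, the select() approach isn't much code.
Here's a rough, untested sketch of the idea (the 4 MB buffer size,
the name megapipe, and the bare-bones error handling are all just
placeholders, not a finished tool):

/*
 * megapipe.c -- sit between two processes with a large ring buffer,
 * using select() so reading from upstream and writing downstream
 * can overlap.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/select.h>

#define RINGSIZE (4 * 1024 * 1024)   /* 4 MB, versus the kernel's 4k pipe */

static void set_nonblock(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0 || fcntl(fd, F_SETFL, flags | O_NONBLOCK) < 0) {
        perror("fcntl");
        exit(1);
    }
}

int main(void)
{
    static char ring[RINGSIZE];
    size_t head = 0, tail = 0, used = 0; /* head: fill point, tail: drain point */
    int eof = 0;

    set_nonblock(STDIN_FILENO);
    set_nonblock(STDOUT_FILENO);

    while (!eof || used > 0) {
        fd_set rfds, wfds;
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);

        if (!eof && used < RINGSIZE)
            FD_SET(STDIN_FILENO, &rfds);   /* room left: keep reading ahead */
        if (used > 0)
            FD_SET(STDOUT_FILENO, &wfds);  /* data buffered: keep draining */

        if (select(STDOUT_FILENO + 1, &rfds, &wfds, NULL, NULL) < 0) {
            if (errno == EINTR)
                continue;
            perror("select");
            return 1;
        }

        if (FD_ISSET(STDIN_FILENO, &rfds)) {
            /* Fill the contiguous free region starting at 'head'. */
            size_t room = (head < tail) ? tail - head : RINGSIZE - head;
            ssize_t n = read(STDIN_FILENO, ring + head, room);
            if (n < 0 && errno != EAGAIN) {
                perror("read");
                return 1;
            }
            if (n == 0)
                eof = 1;
            if (n > 0) {
                head = (head + n) % RINGSIZE;
                used += n;
            }
        }

        if (FD_ISSET(STDOUT_FILENO, &wfds)) {
            /* Drain the contiguous chunk starting at 'tail'. */
            size_t avail = (tail < head) ? head - tail : RINGSIZE - tail;
            ssize_t n = write(STDOUT_FILENO, ring + tail, avail);
            if (n < 0 && errno != EAGAIN) {
                perror("write");
                return 1;
            }
            if (n > 0) {
                tail = (tail + n) % RINGSIZE;
                used -= n;
            }
        }
    }
    return 0;
}

Then something like:
cat /images/test.img | ./megapipe | gunzip --to-stdout > /dev/null
should let the NFS reads run ahead of gunzip's appetite. Note that
select() only gives you real overlap when both ends are pipes, which
is exactly the situation here.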