[ILUG] Rescue file from damaged CDR?
kenn at linux.ie
Thu Sep 14 23:57:28 IST 2000
On Thu, Sep 14, 2000 at 08:00:32PM +0100, Conor Daly wrote:
> Ideas anyone?
> Trying to rescue a ~28Mb file from a damaged CDR. I'm trying
> dd if=/mnt/cdrom/file of=first.out
> once I get an I/O error I get a count of blocks read by dd so I do it
> again using
> dd if=/mnt/cdrom/file of=second.out skip=<blocks already read>+1
> and so on until I run out of blocks to read. I'll then use a single block
> fragment to pad out to the original size and do a
> cat first.out block-fragment second.out block-fragment third.out ...... > recovered-file
> Will this work? Is there a better way?
You might be able to automate this with the noerror and sync conversions to dd:
dd if=/mnt/cdrom/file of=file bs=2048 conv=noerror,sync
'noerror' says continue after read errors. 'sync' says pad each short input
block out to the full block size with NULs.
However, I don't know whether a failed read results in an input block of zero
length that gets padded out and written, or results in that block not
being written _at_all_ to the output file.
To test it, use count=<num-blocks> to restrict the copy to the start of
the file up to just past the first damaged block. Then check whether the
output file is num-blocks*2048 bytes, or smaller. If it's smaller, this
won't work.
Alternatively, do it piecemeal:
NUM_BLOCKS=14000 # 28MB/2048 - work this out exactly
# ...or better yet, work it out in code
INFILE=/mnt/cdrom/file
OUTFILE=recovered-file
BLOCK_SIZE=2048
for i in `seq 0 $(( NUM_BLOCKS - 1 ))` ; do
    dd if=$INFILE of=$OUTFILE bs=$BLOCK_SIZE count=1 seek=$i skip=$i conv=notrunc
done
It doesn't matter whether dd writes a block of zeros or nothing at all for a
bad block here, because the next good block will still land at the correct
position in the output (seek=$i), extending the file if necessary.