[cygwin] DD bug fails to wipe last 48 sectors of a disk

Jason Pyeron jpyeron@pdinc.us
Sun Jun 28 17:50:16 GMT 2020


> -----Original Message-----
> From: Christian Franke
> Sent: Sunday, June 28, 2020 10:35 AM
> 
> Andrey Repin wrote:
> >
> >> dd if=/dev/zero of=/dev/sda iflag=fullblock bs=4M status=progress
> 
> The root of the problem is that the Windows WriteFile() function
> apparently does not support truncated writes at EOM. If seek_position +
> write_size > disk_size, then WriteFile() does nothing and returns an error.
> 
> 
> > oflag=direct
> >
> > Although I'm unsure how Cygwin/Windows handles it. But without this flag, the
> > write is cached, and the problem may be outside dd, or even Cygwin.
> 
> If 'oflag=direct' is used, dd passes O_DIRECT flag to open() call of
> output file. Cygwin's open() function then passes
> FILE_NO_INTERMEDIATE_BUFFERING to NtCreateFile() and the write()
> function calls WriteFile() directly with original address and size.
> 
> Without O_DIRECT, Cygwin ensures that address and size passed to
> WriteFile() are both aligned to sector size. All writes are then done
> through a 64KiB internal buffer.
> 
> As a consequence, oflag=direct in the above dd command may increase
> speed but would also let the final 4MiB WriteFile() fail. Without
> oflag=direct, only the last 64KiB WriteFile() fails.
> 
> To clear the last sectors of the disk, use an appropriate small block
> size. I did this several times with Cygwin 'dd seek=... bs=512 ...' to
> get rid of Intel RST RAID metadata.
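
For the archives, that small-block tail wipe might look something like
the following (illustrative only -- substitute your own device and
sector count):

  SECTORS=312581808   # total 512-byte sectors on the target disk
  # zero the last 48 sectors one at a time, so no write crosses EOM
  dd if=/dev/zero of=/dev/sda bs=512 seek=$((SECTORS - 48)) count=48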

The reason I have never encountered this is that I use a block size which is the largest practical divisor of the drive size (itself always a multiple of 512 bytes), typically between 32 MB and 64 MB, so every write, including the final one, ends exactly at the end of the disk.

E.g. I have a drive that is 160,041,885,696 bytes, i.e. 312,581,808 sectors of 512 bytes. I would use a block size of 39,072,726 bytes, which divides the drive size evenly into exactly 4,096 blocks to write.
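
If anyone wants to script that choice, an untested sketch might look
like this (the device and size below are placeholders; the cap is just
what I find practical):

  DISK=/dev/sda
  SIZE=160041885696              # drive size in bytes
  SECTORS=$((SIZE / 512))        # drives are whole 512-byte sectors
  CAP=$((64 * 1024 * 1024))      # do not use blocks larger than ~64 MB

  # smallest block count that keeps each block under the cap...
  BLOCKS=$(( (SIZE + CAP - 1) / CAP ))
  # ...bumped until the sector count divides evenly
  while [ $((SECTORS % BLOCKS)) -ne 0 ]; do
      BLOCKS=$((BLOCKS + 1))
  done
  BS=$(( SECTORS / BLOCKS * 512 ))

  echo "bs=$BS count=$BLOCKS"
  dd if=/dev/zero of="$DISK" bs=$BS count=$BLOCKS status=progress

Because the block size is a multiple of 512 that divides the drive size
exactly, the last write ends right at the end of the medium instead of
overrunning it.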

-Jason


