Utility to write huge files instantly??? - Storage


Thread: Utility to write huge files instantly???

  1. Utility to write huge files instantly???

    Windows 2000, FAT32 or NTFS.
    For testing,
    I want a utility to create BIG files ~1GB on storage media
    quickly.

    Copying a big file is not an option, too slow.

    I don't care what's in the file as long
    as the OS is happy that it's a "valid" file.
    Needs to work thru normal OS drive letters and drivers.

    Should be able to just write the FAT without doing anything to the
    actual allocation units being allocated???

    Anything like this exist?
    Thanks, mike
    --
    Return address is VALID!

  2. Re: Utility to write huge files instantly???

    > Should be able to just write the FAT without doing anything to the
    > actual allocation units being allocated???


    Do you need NTFS support?

    If you will go thru the usual filesystem, without patching the metadata
    manually - then lseek() to (1GB - 1) and then write 1 byte of zero. The OS will
    zero the whole 1GB automatically.

    This is recommended by MS's filesystem guys to create the minimally fragmented
    file, and is also portable to UNIXen (UNIXen will create a sparse file in this
    case though).
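
    In Win32 terms, something like this (a rough, untested sketch; the
    path and the 1GB size are just placeholders, error handling omitted):

    #include <windows.h>

    int main(void)
    {
        LARGE_INTEGER pos;
        DWORD written;

        /* placeholder path and size */
        HANDLE h = CreateFileA("x:\\big1.txt", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        pos.QuadPart = (LONGLONG)1024 * 1024 * 1024 - 1;  /* 1GB - 1 */
        SetFilePointerEx(h, pos, NULL, FILE_BEGIN);       /* the "lseek" */
        WriteFile(h, "", 1, &written, NULL);  /* one zero byte at the end;
                                                 the OS fills in the rest */
        CloseHandle(h);
        return 0;
    }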

    --
    Maxim Shatskih, Windows DDK MVP
    StorageCraft Corporation
    maxim@storagecraft.com
    http://www.storagecraft.com


  3. Re: Utility to write huge files instantly???

    Maxim S. Shatskih wrote:
    >> Should be able to just write the FAT without doing anything to the
    >> actual allocation units being allocated???

    >
    > Do you need NTFS support?
    >
    > If you will go thru the usual filesystem, without patching the metadata
    > manually - then lseek() to (1GB - 1) and then write 1 byte of zero. The OS will
    > zero the whole 1GB automatically.


    I don't think he wants to take the time to zero (or otherwise write) the
    actual file data. ISTR that an (undocumented?) API exists to
    preallocate a file without writing it, though (useful for file-copy
    operations where the final size is known, you want reasonable
    contiguity, but don't want the useless overhead of zeroing the output
    file before overwriting it with data: IIRC NT maintains 'high water
    mark' information similar to VMS's that keeps one from scavenging
    previous data on the disk in such cases before the real data is written).

    - bill

  4. Re: Utility to write huge files instantly???

    Bill Todd wrote:
    > Maxim S. Shatskih wrote:
    >>> Should be able to just write the FAT without doing anything to the
    >>> actual allocation units being allocated???

    >>
    >> Do you need NTFS support?
    >>
    >> If you will go thru the usual filesystem, without patching the metadata
    >> manually - then lseek() to (1GB - 1) and then write 1 byte of zero.
    >> The OS will
    >> zero the whole 1GB automatically.

    >
    > I don't think he wants to take the time to zero (or otherwise write) the
    > actual file data. ISTR that an (undocumented?) API exists to
    > preallocate a file without writing it, though (useful for file-copy
    > operations where the final size is known, you want reasonable
    > contiguity, but don't want the useless overhead of zeroing the output
    > file before overwriting it with data: IIRC NT maintains 'high water
    > mark' information similar to VMS's that keeps one from scavenging
    > previous data on the disk in such cases before the real data is written).
    >
    > - bill


    Thanks, but when I said "utility", I meant point/click/done.
    I can bungle my way thru a trivial VB6 program, but anything more than
    that is a stretch.

    I dug around in MSDN and found these:

    IStream::SetSize

    FileStream.SetLength Method

    CFile::SetLength

    But I'm not sure how they behave with respect to end of file, or
    whether they avoid taking a long time. And I'd be WAY beyond my
    comfort level trying to program it.

    Think "UTILITY". :-)

    mike

    --
    Return address is VALID!

  5. Re: Utility to write huge files instantly???

    mike wrote:
    > Windows 2000, FAT32 or NTFS.
    > For testing,
    > I want a utility to create BIG files ~1GB on storage media
    > quickly.
    >
    > Copying a big file is not an option, too slow.
    >
    > I don't care what's in the file as long
    > as the OS is happy that it's a "valid" file.
    > Needs to work thru normal OS drive letters and drivers.
    >
    > Should be able to just write the FAT without doing anything to the
    > actual allocation units being allocated???
    >
    > Anything like this exist?
    > Thanks, mike


    There are ports of the "dd" unix program for windows. It can be used to
    write giant files with minimal effort.

  6. Re: Utility to write huge files instantly???

    Cydrome Leader wrote:
    > mike wrote:
    >> Windows 2000, FAT32 or NTFS.
    >> For testing,
    >> I want a utility to create BIG files ~1GB on storage media
    >> quickly.
    >>
    >> Copying a big file is not an option, too slow.
    >>
    >> I don't care what's in the file as long
    >> as the OS is happy that it's a "valid" file.
    >> Needs to work thru normal OS drive letters and drivers.
    >>
    >> Should be able to just write the FAT without doing anything to the
    >> actual allocation units being allocated???
    >>
    >> Anything like this exist?
    >> Thanks, mike

    >
    > There are ports of the "dd" unix program for windows. It can be used to
    > write giant files with minimal effort.


    Thanks, but minimal effort is not nearly as important as minimal time.
    The version of dd I tried does work:

    dd if=infile of=outfile seek=2000000
    (the input file is 200 bytes)

    but it's only marginally faster
    than copying a file. I need something that's a thousand times or so
    faster than a file copy for a 1GB file.

    Maybe there's a version that doesn't fill the file with zeros, but just
    allocates the space?? Think about copying 16 gigabytes to a
    USB1.1 drive...while we're still young.

    --
    Return address is VALID!

  7. Re: Utility to write huge files instantly???

    mike wrote:
    > Cydrome Leader wrote:
    >> mike wrote:
    >>> Windows 2000, FAT32 or NTFS.
    >>> For testing,
    >>> I want a utility to create BIG files ~1GB on storage media
    >>> quickly.
    >>>
    >>> Copying a big file is not an option, too slow.
    >>>
    >>> I don't care what's in the file as long
    >>> as the OS is happy that it's a "valid" file.
    >>> Needs to work thru normal OS drive letters and drivers.
    >>>
    >>> Should be able to just write the FAT without doing anything to the
    >>> actual allocation units being allocated???
    >>>
    >>> Anything like this exist?
    >>> Thanks, mike

    >>
    >> There are ports of the "dd" unix program for windows. It can be used to
    >> write giant files with minimal effort.

    >
    > Thanks, but minimal effort is not nearly as important as minimal time.
    > The version of dd I tried does work,
    >
    > dd if=infile of=outfile seek=2000000
    > if is 200bytes.


    set a larger block size, bs=65536 etc. The default 512 byte blocks are
    slow.


    > but it's only marginally faster
    > than copying a file. I need something that's a thousand times or so
    > faster than a file copy for a 1GB file.
    >
    > Maybe there's a version that doesn't fill the file with zeros, but just
    > allocates the space?? Think about copying 16 gigabytes to a
    > USB1.1 drive...while we're still young.


    Unless you want sparse files, what you want isn't going to happen.

    16GB over USB 1.1 is just nonsense to start with.

    So, if you want to actually write lots of data, you're going to have to
    wait. If you want giant empty files where the insides don't matter, you're
    not copying large amounts of data in the first place. The same applies to
    sparse files.

  8. Re: Utility to write huge files instantly???

    > I don't think he wants to take the time to zero (or otherwise write) the
    > actual file data. ISTR that an (undocumented?) API exists to


    SetEndOfFile, it is documented.

    --
    Maxim Shatskih, Windows DDK MVP
    StorageCraft Corporation
    maxim@storagecraft.com
    http://www.storagecraft.com


  9. Re: Utility to write huge files instantly???

    Cydrome Leader wrote:
    > mike wrote:
    >> Cydrome Leader wrote:
    >>> mike wrote:
    >>>> Windows 2000, FAT32 or NTFS.
    >>>> For testing,
    >>>> I want a utility to create BIG files ~1GB on storage media
    >>>> quickly.
    >>>>
    >>>> Copying a big file is not an option, too slow.
    >>>>
    >>>> I don't care what's in the file as long
    >>>> as the OS is happy that it's a "valid" file.
    >>>> Needs to work thru normal OS drive letters and drivers.
    >>>>
    >>>> Should be able to just write the FAT without doing anything to the
    >>>> actual allocation units being allocated???
    >>>>
    >>>> Anything like this exist?
    >>>> Thanks, mike
    >>> There are ports of the "dd" unix program for windows. It can be used to
    >>> write giant files with minimal effort.

    >> Thanks, but minimal effort is not nearly as important as minimal time.
    >> The version of dd I tried does work,
    >>
    >> dd if=infile of=outfile seek=2000000
    >> if is 200bytes.

    >
    > set a larger block size, bs=65536 etc. The default 512 byte blocks are
    > slow.


    That's clearly not what he wants.

    >
    >
    >> but it's only marginally faster
    >> than copying a file. I need something that's a thousand times or so
    >> faster than a file copy for a 1GB file.
    >>
    >> Maybe there's a version that doesn't fill the file with zeros, but just
    >> allocates the space?? Think about copying 16 gigabytes to a
    >> USB1.1 drive...while we're still young.

    >
    > Unless you want sparse files, what you want isn't going to happen.


    I suspect you're wrong: he probably just needs either to write one
    himself (which may require linking with ntdll.lib from the Platform
    SDK - see http://www.informit.com/articles/art...22442&seqNum=3) or to
    find a utility that uses NtCreateFile to create the file, and set the
    AllocationSize parameter to the size he wants the file to be.
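
    Roughly like this (an untested C sketch, not a finished utility; it
    needs ntdll.lib, the NT-style path is a placeholder, and the
    disposition/option constants missing from winternl.h are defined by
    hand from their DDK values):

    #include <windows.h>
    #include <winternl.h>
    #include <string.h>

    #ifndef FILE_OVERWRITE_IF
    #define FILE_OVERWRITE_IF            0x00000005
    #endif
    #ifndef FILE_NON_DIRECTORY_FILE
    #define FILE_NON_DIRECTORY_FILE      0x00000040
    #endif
    #ifndef FILE_SYNCHRONOUS_IO_NONALERT
    #define FILE_SYNCHRONOUS_IO_NONALERT 0x00000020
    #endif
    #ifndef OBJ_CASE_INSENSITIVE
    #define OBJ_CASE_INSENSITIVE         0x00000040L
    #endif

    int main(void)
    {
        UNICODE_STRING name;
        OBJECT_ATTRIBUTES oa;
        IO_STATUS_BLOCK iosb;
        LARGE_INTEGER alloc;
        HANDLE h;
        NTSTATUS status;

        /* NT-style path (placeholder) */
        RtlInitUnicodeString(&name, L"\\??\\X:\\big1.txt");

        memset(&oa, 0, sizeof(oa));
        oa.Length = sizeof(oa);
        oa.ObjectName = &name;
        oa.Attributes = OBJ_CASE_INSENSITIVE;

        alloc.QuadPart = (LONGLONG)1024 * 1024 * 1024;  /* ask for 1GB */

        status = NtCreateFile(&h, FILE_GENERIC_WRITE | SYNCHRONIZE,
                              &oa, &iosb,
                              &alloc,                   /* AllocationSize */
                              FILE_ATTRIBUTE_NORMAL,
                              0,                        /* no sharing */
                              FILE_OVERWRITE_IF,
                              FILE_NON_DIRECTORY_FILE |
                                  FILE_SYNCHRONOUS_IO_NONALERT,
                              NULL, 0);
        if (status < 0)                  /* NTSTATUS: negative = failure */
            return 1;

        /* Note: the file's EOF is still 0 here; NTFS may give the extra
           allocation back when the handle is closed. */
        CloseHandle(h);
        return 0;
    }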

    - bill

  10. Re: Utility to write huge files instantly???

    Maxim S. Shatskih wrote:
    >> I don't think he wants to take the time to zero (or otherwise write) the
    >> actual file data. ISTR that an (undocumented?) API exists to

    >
    > SetEndOfFile, it is documented.


    Indeed, and that's clearly the right approach since it seems to
    accomplish what's desired using a documented interface. But it was not
    what I was remembering (see my other reply).

    - bill

  11. Re: Utility to write huge files instantly???

    > find a utility that uses NtCreateFile to create the file, and set the
    > AllocationSize parameter to the size he wants the file to be.


    According to MS's filesystem guys, lseek+SetEndOfFile is better.
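
    I.e., roughly this (an untested sketch, not a drop-in utility; path
    and size are placeholders, error handling omitted):

    #include <windows.h>

    int main(void)
    {
        LARGE_INTEGER size;
        HANDLE h = CreateFileA("x:\\big1.txt", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        size.QuadPart = (LONGLONG)1024 * 1024 * 1024;  /* 1 GB */
        SetFilePointerEx(h, size, NULL, FILE_BEGIN);   /* the "lseek" */
        SetEndOfFile(h);         /* extend EOF to the current pointer */
        CloseHandle(h);
        return 0;
    }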

    --
    Maxim Shatskih, Windows DDK MVP
    StorageCraft Corporation
    maxim@storagecraft.com
    http://www.storagecraft.com


  12. Re: Utility to write huge files instantly???

    Bill Todd wrote:
    > Maxim S. Shatskih wrote:
    >>> I don't think he wants to take the time to zero (or otherwise write)
    >>> the actual file data. ISTR that an (undocumented?) API exists to

    >>
    >> SetEndOfFile, it is documented.

    >
    > Indeed, and that's clearly the right approach since it seems to
    > accomplish what's desired using a documented interface. But it was not
    > what I was remembering (see my other reply).
    >
    > - bill


    I have two problems writing code. I'm lazy.
    And I have no idea what I'm doing.
    I found a VB6 code example and hacked it as follows:

    Private Sub Command1_Click()
    Path = "x:\big1.txt"
    hFile = CreateFile(Path, GENERIC_WRITE, _
        FILE_SHARE_READ Or FILE_SHARE_WRITE, ByVal 0&, OPEN_ALWAYS, 0, 0)
    If hFile = -1 Then End
    ' write a short marker string at the start of the file
    WriteFile hFile, ByVal "Very-very cool & long string", 28, _
        BytesWritten, ByVal 0&

    ' move the pointer ~900 MB out and set end-of-file there
    SetFilePointer hFile, 900000000, 0, FILE_BEGIN
    SetEndOfFile hFile
    CloseHandle hFile
    ....

    This does make a big file, but it fills the file with zeros
    and takes 18 minutes to do it.
    Am I using the wrong arguments?

    I need to do this 16 times...I need to get rid of the 18 minutes X 16...

    mike

    --
    Return address is VALID!

  13. Re: Utility to write huge files instantly???

    mike wrote:
    > Bill Todd wrote:
    >> Maxim S. Shatskih wrote:
    >>>> I don't think he wants to take the time to zero (or otherwise write)
    >>>> the actual file data. ISTR that an (undocumented?) API exists to
    >>>
    >>> SetEndOfFile, it is documented.

    >>
    >> Indeed, and that's clearly the right approach since it seems to
    >> accomplish what's desired using a documented interface. But it was
    >> not what I was remembering (see my other reply).
    >>
    >> - bill

    >
    > I have two problems writing code. I'm lazy.
    > And I have no idea what I'm doing.
    > I found a VB6 code example and hacked it as follows:
    >
    > Private Sub Command1_Click()
    > Path = "x:\big1.txt"
    > hFile = CreateFile(Path, GENERIC_WRITE, FILE_SHARE_READ Or
    > FILE_SHARE_WRITE, ByVal 0&, OPEN_ALWAYS, 0, 0)
    > If hFile = -1 Then End
    > WriteFile hFile, ByVal "Very-very cool & long string", 28, BytesWritten,
    > ByVal 0&
    >
    > SetFilePointer hFile, 900000000, 0, FILE_BEGIN
    > SetEndOfFile hFile
    > CloseHandle hFile
    > ...
    >
    > This does make a big file, but it fills the file with zeros
    > and takes 18 minutes to do it.
    > Am I using the wrong arguments?


    It's possible that you're just using the wrong OS: I did find one
    reference saying that on Win2K and earlier SetEndOfFile zeros the
    space it allocates, whereas on XP and later it does not, because
    ValidDataLength protects the garbage in the allocated space from
    being read until it has been overwritten. I thought that was true in
    Win2K as well, but I may have been mistaken - and indeed the MSDN
    documentation states that SetFileValidData only exists in XP and
    Vista; but the internal ValidDataLength guard existed at least as
    early as Win2K, and thus there really would be no *need* for
    SetEndOfFile to zero allocated space there.
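
    For what it's worth, on XP or later the no-zeroing path would look
    roughly like this in C (an untested sketch; SetFileValidData needs an
    NTFS volume and the SeManageVolumePrivilege, so it has to run with
    admin rights; the path and size are placeholders):

    #define _WIN32_WINNT 0x0501   /* SetFileValidData is declared for XP+ */
    #include <windows.h>

    static BOOL EnableManageVolumePrivilege(void)
    {
        HANDLE tok;
        TOKEN_PRIVILEGES tp;

        if (!OpenProcessToken(GetCurrentProcess(),
                              TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &tok))
            return FALSE;
        LookupPrivilegeValueA(NULL, "SeManageVolumePrivilege",
                              &tp.Privileges[0].Luid);
        tp.PrivilegeCount = 1;
        tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
        AdjustTokenPrivileges(tok, FALSE, &tp, 0, NULL, NULL);
        CloseHandle(tok);
        return GetLastError() == ERROR_SUCCESS;
    }

    int main(void)
    {
        LARGE_INTEGER size;
        HANDLE h;

        if (!EnableManageVolumePrivilege())
            return 1;                        /* needs admin rights */

        h = CreateFileA("x:\\big1.txt", GENERIC_WRITE, 0, NULL,
                        CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return 1;

        size.QuadPart = (LONGLONG)1024 * 1024 * 1024;   /* 1 GB */
        SetFilePointerEx(h, size, NULL, FILE_BEGIN);
        SetEndOfFile(h);                     /* allocate and set EOF */
        SetFileValidData(h, size.QuadPart);  /* skip the zero-fill */
        CloseHandle(h);
        return 0;
    }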

    - bill

  14. Re: Utility to write huge files instantly???

    Bill Todd wrote:
    > Cydrome Leader wrote:
    >> mike wrote:
    >>> Cydrome Leader wrote:
    >>>> mike wrote:
    >>>>> Windows 2000, FAT32 or NTFS.
    >>>>> For testing,
    >>>>> I want a utility to create BIG files ~1GB on storage media
    >>>>> quickly.
    >>>>>
    >>>>> Copying a big file is not an option, too slow.
    >>>>>
    >>>>> I don't care what's in the file as long
    >>>>> as the OS is happy that it's a "valid" file.
    >>>>> Needs to work thru normal OS drive letters and drivers.
    >>>>>
    >>>>> Should be able to just write the FAT without doing anything to the
    >>>>> actual allocation units being allocated???
    >>>>>
    >>>>> Anything like this exist?
    >>>>> Thanks, mike
    >>>> There are ports of the "dd" unix program for windows. It can be used to
    >>>> write giant files with minimal effort.
    >>> Thanks, but minimal effort is not nearly as important as minimal time.
    >>> The version of dd I tried does work,
    >>>
    >>> dd if=infile of=outfile seek=2000000
    >>> if is 200bytes.

    >>
    >> set a larger block size, bs=65536 etc. The default 512 byte blocks are
    >> slow.

    >
    > That's clearly not what he wants.


    What he really wants is an instant free answer to some strange problem.

    I'd love to hear why one needs 16GB files that lack any real data stored
    across USB 1.1

  15. Re: Utility to write huge files instantly???

    Bill Todd wrote:
    > mike wrote:
    >> Bill Todd wrote:
    >>> Maxim S. Shatskih wrote:
    >>>>> I don't think he wants to take the time to zero (or otherwise
    >>>>> write) the actual file data. ISTR that an (undocumented?) API
    >>>>> exists to
    >>>>
    >>>> SetEndOfFile, it is documented.
    >>>
    >>> Indeed, and that's clearly the right approach since it seems to
    >>> accomplish what's desired using a documented interface. But it was
    >>> not what I was remembering (see my other reply).
    >>>
    >>> - bill

    >>
    >> I have two problems writing code. I'm lazy.
    >> And I have no idea what I'm doing.
    >> I found a VB6 code example and hacked it as follows:
    >>
    >> Private Sub Command1_Click()
    >> Path = "x:\big1.txt"
    >> hFile = CreateFile(Path, GENERIC_WRITE, FILE_SHARE_READ Or
    >> FILE_SHARE_WRITE, ByVal 0&, OPEN_ALWAYS, 0, 0)
    >> If hFile = -1 Then End
    >> WriteFile hFile, ByVal "Very-very cool & long string", 28,
    >> BytesWritten, ByVal 0&
    >>
    >> SetFilePointer hFile, 900000000, 0, FILE_BEGIN
    >> SetEndOfFile hFile
    >> CloseHandle hFile
    >> ...
    >>
    >> This does make a big file, but it fills the file with zeros
    >> and takes 18 minutes to do it.
    >> Am I using the wrong arguments?

    >
    > It's possible that you're just using the wrong OS: I did find one
    > reference saying that on Win2K and earlier SetEndOfFile zeros the
    > space it allocates, whereas on XP and later it does not, because
    > ValidDataLength protects the garbage in the allocated space from
    > being read until it has been overwritten. I thought that was true in
    > Win2K as well, but I may have been mistaken - and indeed the MSDN
    > documentation states that SetFileValidData only exists in XP and
    > Vista; but the internal ValidDataLength guard existed at least as
    > early as Win2K, and thus there really would be no *need* for
    > SetEndOfFile to zero allocated space there.
    >
    > - bill

    I'm testing it on an XP system. I am running it from within the VB6
    environment. Could that make a difference?
    I should compile it and try again.
    But I really do want to run it on a win2k laptop.
    mike

    --
    Return address is VALID!

  16. Re: Utility to write huge files instantly???

    Cydrome Leader wrote:
    > Bill Todd wrote:
    >> Cydrome Leader wrote:
    >>> mike wrote:
    >>>> Cydrome Leader wrote:
    >>>>> mike wrote:
    >>>>>> Windows 2000, FAT32 or NTFS.
    >>>>>> For testing,
    >>>>>> I want a utility to create BIG files ~1GB on storage media
    >>>>>> quickly.
    >>>>>>
    >>>>>> Copying a big file is not an option, too slow.
    >>>>>>
    >>>>>> I don't care what's in the file as long
    >>>>>> as the OS is happy that it's a "valid" file.
    >>>>>> Needs to work thru normal OS drive letters and drivers.
    >>>>>>
    >>>>>> Should be able to just write the FAT without doing anything to the
    >>>>>> actual allocation units being allocated???
    >>>>>>
    >>>>>> Anything like this exist?
    >>>>>> Thanks, mike
    >>>>> There are ports of the "dd" unix program for windows. It can be used to
    >>>>> write giant files with minimal effort.
    >>>> Thanks, but minimal effort is not nearly as important as minimal time.
    >>>> The version of dd I tried does work,
    >>>>
    >>>> dd if=infile of=outfile seek=2000000
    >>>> if is 200bytes.
    >>> set a larger block size, bs=65536 etc. The default 512 byte blocks are
    >>> slow.

    >> That's clearly not what he wants.

    >
    > What he really wants is an instant free answer to some strange problem.


    EXACTLY!! That's what the internet is for, gain from the experience of
    others, not reinvent the wheel, etc. I had no idea this would be a
    difficult problem for a real programmer experienced with storage
    architecture.
    >
    > I'd love to hear why one needs 16GB files that lack any real data stored
    > across USB 1.1

    Not at liberty to say exactly why. Just need to fill space quickly.
    USB (1.1 or 2.0 they're both too slow) is exactly the reason I can't
    wait for the files to be filled up with data...garbage is fine...I'm
    never gonna read it anyway. Just need the OS to think there's a valid
    file there.

    mike


    --
    Return address is VALID!

  17. Re: Utility to write huge files instantly???

    > I'm testing it on an XP system. I am running it from within the VB6
    > environment. Could that make a difference?


    No.

    --
    Maxim Shatskih, Windows DDK MVP
    StorageCraft Corporation
    maxim@storagecraft.com
    http://www.storagecraft.com


  18. Re: Utility to write huge files instantly???

    mike wrote:
    > Cydrome Leader wrote:
    >> Bill Todd wrote:
    >>> Cydrome Leader wrote:
    >>>> mike wrote:
    >>>>> Cydrome Leader wrote:
    >>>>>> mike wrote:
    >>>>>>> Windows 2000, FAT32 or NTFS.
    >>>>>>> For testing,
    >>>>>>> I want a utility to create BIG files ~1GB on storage media
    >>>>>>> quickly.
    >>>>>>>
    >>>>>>> Copying a big file is not an option, too slow.
    >>>>>>>
    >>>>>>> I don't care what's in the file as long
    >>>>>>> as the OS is happy that it's a "valid" file.
    >>>>>>> Needs to work thru normal OS drive letters and drivers.
    >>>>>>>
    >>>>>>> Should be able to just write the FAT without doing anything to the
    >>>>>>> actual allocation units being allocated???
    >>>>>>>
    >>>>>>> Anything like this exist?
    >>>>>>> Thanks, mike
    >>>>>> There are ports of the "dd" unix program for windows. It can be used to
    >>>>>> write giant files with minimal effort.
    >>>>> Thanks, but minimal effort is not nearly as important as minimal time.
    >>>>> The version of dd I tried does work,
    >>>>>
    >>>>> dd if=infile of=outfile seek=2000000
    >>>>> if is 200bytes.
    >>>> set a larger block size, bs=65536 etc. The default 512 byte blocks are
    >>>> slow.
    >>> That's clearly not what he wants.

    >>
    >> What he really wants is an instant free answer to some strange problem.

    >
    > EXACTLY!! That's what the internet is for, gain from the experience of
    > others, not reinvent the wheel, etc. I had no idea this would be a
    > difficult problem for a real programmer experienced with storage
    > architecture.


    You might want to remember real programmers and storage people work on
    real problems and real data, not nonsense.

    >> I'd love to hear why one needs 16GB files that lack any real data stored
    >> across USB 1.1

    > Not at liberty to say exactly why. Just need to fill space quickly.
    > USB (1.1 or 2.0 they're both too slow) is exactly the reason I can't
    > wait for the files to be filled up with data...garbage is fine...I'm
    > never gonna read it anyway. Just need the OS to think there's a valid
    > file there.
    >
    > mike


    Well, I had an answer for the problem, but it's too secret to talk about.


  19. Re: Utility to write huge files instantly???

    Cydrome Leader wrote:

    ....

    >> That's clearly not what he wants.

    >
    > What he really wants is an instant free answer to some strange problem.


    And for some reason you seem to have a problem with that.

    Others here don't. If you have nothing useful to contribute, you might
    consider just shutting up.

    - bill

  20. Re: Utility to write huge files instantly???

    Bill Todd wrote:
    > Cydrome Leader wrote:
    >
    > ...
    >
    >>> That's clearly not what he wants.

    >>
    >> What he really wants is an instant free answer to some strange problem.

    >
    > And for some reason you seem to have a problem with that.
    >
    > Others here don't. If you have nothing useful to contribute, you might
    > consider just shutting up.
    >
    > - bill

    Now, boyz, let's not fight.
    I found the solution in another newsgroup.
    Thanks, mike

    --
    Return address is VALID!
