Newbie - Optimizing an NFS client - Linux



Thread: Newbie - Optimizing an NFS client

  1. Newbie - Optimizing an NFS client

    Hi
    I'm writing an NFS client - a large number of files are periodically
    written to a directory on the client end, then transported over NFS
    to a remote server and then deleted on the client end.

    I'm sure it's not as simple as a remote mount, fopen() and then a
    write to the remote filesystem.

    I'd appreciate it if someone could point out the various
    issues/optimizations I need to take care of.

    thanks!
    JON


  2. Re: Newbie - Optimizing an NFS client

    jon wayne wrote in part:
    > I'm writing an NFS client - a large number of files are
    > periodically written to a directory on the client end, then
    > transported over NFS to a remote server and then deleted
    > on the client end.


    > I'm sure it's not as simple as a remote mount, fopen()
    > and then a write to the remote filesystem.


    > I'd appreciate it if someone could point out the various
    > issues/optimizations I need to take care of.


    I'm sure you can find lots of guides to tuning NFS.

    I'd just suggest you look at userspace optimizations like
    tar'ing your files before the NFS transfer (if there are many
    small files), and/or breaking off incremental update files (if
    they are large).

    -- Robert




  3. Re: Newbie - Optimizing an NFS client

    jon wayne wrote:
    > Hi
    > I'm writing an NFS client - a large number of files are periodically
    > written to a directory on the client end, then transported over NFS
    > to a remote server and then deleted on the client end.


    If possible... use rsync instead, especially if the remote
    server is simply an up-to-date repository of the copies. It
    sounds like the files may be unique every time, though, since
    they are deleted on the client side(?).

    It should be much, much faster than NFS.

    If they are always different... then doing a tar copy via ssh
    will probably work well. If the connection between the client
    and the remote server is horribly slow, you can even compress
    the data going over the wire.

    >
    > I'm sure it's not as simple as a remote mount, fopen() and then write
    > to the remote FileSystem
    >
    > I'd appreciate if someone could point out various issues/optimizations
    > which I need to take care of.
    >


    NFS has issues, especially if you're not running v4... and I'm
    not sure NFSv4 is quite ready on Linux yet.

    (If you are running NFS, you are at risk of potential filesystem
    corruption. With that said, I run a lot of NFS, mostly v3/tcp.
    Be very wary of running NFS over UDP on gigabit links. There's
    less chance of trouble if all machines are somewhat contemporary
    Linux boxes; the bigger problems are with old Unix OSes where an
    upgraded gigabit NIC has been added.)



  4. Re: Newbie - Optimizing an NFS client

    Hi
    Thanks for the reply.

    OK, I made a mistake: what I meant was that I'm writing an NFS
    client app, not the client itself. My apologies for taking your
    time.

    But I still have some doubts.

    [environment: Linux > 2.6, NFS v4]
    1. Does every fwrite() (of the system page size, say 4k) transfer
    the data block immediately to the server?

    2. Will fclose() block until a response to the COMMIT is received
    from the server? (I'm avoiding an fsync() after every write.)
        If it doesn't block, then the client app has no way of knowing
    whether the data has actually been sent across or not.

    3. What I'm really concerned about is this: once I've closed the
    file fd, is it guaranteed that the data has been successfully
    written to the server?
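    For what it's worth, on a Linux NFS client fwrite() normally just copies
    into the client's page cache, and the only portable way to know the data
    has reached the server's stable storage is an explicit fsync() before
    close, plus checking fclose()'s return value (NFS can report deferred
    write errors at close time). A minimal sketch of that pattern, assuming
    POSIX and a hypothetical path (error handling abbreviated):

    ```c
    #include <stdio.h>
    #include <unistd.h>   /* fsync() */

    /* Write a buffer to `path` and don't return success until the data
     * has been acknowledged by the server (or local disk). */
    int write_file_durably(const char *path, const void *buf, size_t len)
    {
        FILE *fp = fopen(path, "wb");
        if (!fp)
            return -1;

        /* fwrite() only fills stdio's buffer / the page cache; it says
         * nothing about the data having reached the NFS server. */
        if (fwrite(buf, 1, len, fp) != len) {
            fclose(fp);
            return -1;
        }

        /* Push stdio's userspace buffer into the kernel... */
        if (fflush(fp) != 0) {
            fclose(fp);
            return -1;
        }

        /* ...then force the kernel to write the data out and wait for the
         * server's acknowledgement. This is the durability guarantee. */
        if (fsync(fileno(fp)) != 0) {
            fclose(fp);
            return -1;
        }

        /* fclose()/close() can still surface a deferred write error on
         * NFS, so its return value must be checked as well. */
        return fclose(fp) == 0 ? 0 : -1;
    }

    int main(void)
    {
        const char msg[] = "hello over nfs\n";
        if (write_file_durably("/tmp/nfs-demo.txt", msg, sizeof msg - 1) != 0) {
            perror("write_file_durably");
            return 1;
        }
        return 0;
    }
    ```

    The fsync() per file is the expensive part; if the per-file overhead is
    too high, batching files (e.g. via tar, as suggested above) amortizes it.
    
    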


    thanks!




