NFS - OpenVMS to OpenVMS - VMS



Thread: NFS - OpenVMS to OpenVMS

  1. NFS - OpenVMS to OpenVMS

    Hi all

    I'm working on migrating source code from Alpha to Itanium. The
    source code is in CMS on a single-node Alpha. The new dev-env Itanium
    will also be a single node. Both the Alpha and the Itanium only have
    TCP/IP installed, that is, NO DECnet, and that is not an option
    either (at least for now).

    I've been trying to NFS-export the source-code disk from the Alpha
    and mount it on the Itanium. The TCP/IP Services versions are:

    HP TCP/IP Services for OpenVMS Alpha Version V5.5 - ECO 1
    on an AlphaServer ES45 Model 2 running OpenVMS V8.2

    HP TCP/IP Services for OpenVMS Industry Standard 64 Version V5.6 -
    ECO 2
    on an HP rx1620 (1.60GHz/3.0MB) running OpenVMS E8.3-1H1

    On the Alpha I have the source-code disk mapped as '/src' and
    exported to the Itanium only.

    Pathname    Logical File System
    /src        ALPHA$DKB2:

    File System    Host name
    /src           ia64.somedomain

    I have an NFS proxy for my user and the system account:

    VMS User_name    Type    User_ID    Group_ID    Host_name
    USER             OND          40           2    ia64.somedomain
    SYSTEM           OND           1           4    ia64.somedomain
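
    For reference, the mapping, export, and proxies above were set up
    with commands roughly like these (typed from memory, so the exact
    syntax may be slightly off):

```
$ TCPIP
TCPIP> MAP "/src" ALPHA$DKB2:
TCPIP> ADD EXPORT "/src" /HOST=ia64.somedomain
TCPIP> ADD PROXY USER /NFS=INCOMING /UID=40 /GID=2 /HOST=ia64.somedomain
TCPIP> ADD PROXY SYSTEM /NFS=INCOMING /UID=1 /GID=4 /HOST=ia64.somedomain
```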

    I only have proxies set up on the Alpha; the documentation sometimes
    mentions that proxies should be set up on the NFS-client side as
    well, but I haven't figured out why that should be necessary. Both
    the NFS and PORTMAPPER server components are started on the Alpha,
    and on the Itanium only the NFS client component is started.

    I'm mounting the source-code-disk using the following command:
    $ tcpip mount src: src src: /path="/src" /host=alpha /structure=5 /system

    OPCOM says:
    %%%%%%%%%%% OPCOM 14-AUG-2008 13:58:59.15 %%%%%%%%%%%
    Message from user TCPIP$NFS on ALPHA
    %TCPIP-S-NFS_MNTSUC, mounted file system /src
    -TCPIP-S-NFS_CLIENT, uid=40 gid=2 host_name = ia64.somedomain

    Now to my questions.

    1) How can I get the same 'logical'/disk name on the client instead
    of a new DNFSn: at every mount? (I've tried /PROCESSOR=SAMENFS20
    without success.)

    2) Does anyone have a similar setup and is willing to share setup
    hints etc.?

    3) I've also been trying to mount the disk on Linux, but I always
    get 'permission denied' when accessing the mount. The mount point
    on Linux looks like:
    'drwxr-x--x 2 nobody nogroup 512 2008-08-14 13:07 src/'

    I have a proxy set up for my Linux user, but the 'uid' has always
    shown up as 0 in OPCOM.

    %%%%%%%%%%% OPCOM 13-AUG-2008 19:47:56.98 %%%%%%%%%%%
    Message from user TCPIP$NFS on ALPHA
    %TCPIP-S-NFS_MNTSUC, mounted file system /src
    -TCPIP-S-NFS_CLIENT, uid=0 gid=1002 host_name = linux.somedomain

    USER             OND        1001        1002    linux.somedomain

    From /etc/fstab:
    alpha:/src /mnt/alpha/src nfs rw,user,rsize=8192,wsize=8192,nolock,proto=udp,hard,intr,nfsvers=3 0 0

    Does anyone have a workaround for that?

    4) I've been reading chapter 22 (NFS Server) and chapter 23 (NFS
    Client) at least a dozen times, but I don't seem to be able to
    understand the 'noproxy_uid/noproxy_gid' stuff. Is someone willing
    to shed some light on that?

    Regards
    - Ingi



  2. Re: NFS - OpenVMS to OpenVMS

    Ingi wrote:
    > [...snip...]



    Maybe it's better if you tell us *why* you need this NFS setup.

    Is it just for the migration period? Then you could
    make a plain copy (BACKUP/ZIP/FTP), set up your I64
    environment, and make a final copy when you "switch".

    Or are you going to develop for both platforms using the
    same sources? Then I guess that a "real" cluster setup
    will give you much better functionality than NFS.

    Or maybe you need to run the migration and new Alpha
    development concurrently? Then I guess that a cluster
    setup is still the best way.

    Note that NFS lacks a lot when it comes to e.g. locking.

  3. Re: NFS - OpenVMS to OpenVMS

    On Aug 14, 3:35 pm, Jan-Erik Söderholm wrote:
    > [...snip...]


    Hi and thank you for your answer.

    The reason I'm looking at NFS is that a DECdfs-based solution
    requires DECnet (please correct me if I'm wrong). The NFS solution
    would only be temporary, i.e. when the migration/porting is complete
    the Alpha will be decoupled. I guess the migration period will be a
    maximum of 6 months. Performance of NFS is not critical (yet). I
    would prefer NFS over 'backup/zip/ftp', and I would most prefer
    DECdfs, but again, DECnet is not an option (yet...). Is it possible
    to run DECdfs over TCP/IP only?

    Because the dev-env is a single-node solution, wouldn't clustering
    be a little overkill? (It would naturally give us the possibility to
    add other nodes/disks, but then again we don't need that.)

    Looks like I'll be resorting to 'backup/zip/ftp'. I know that works,
    because people have been doing that for ages...

    I assume that the NFS server/client in TCP/IP Services is more for
    integrating OpenVMS with *nix environments than for OpenVMS to
    OpenVMS.

    Regards
    - Ingi





  4. Re: NFS - OpenVMS to OpenVMS

    Ingi wrote:

    >[...snip...]
    >
    > The reason why I'm looking at NFS is because a DECdfs based solution
    > requires DECnet (please correct me if I'm wrong). The NFS solutions
    > would only be temporary i.e. when migration/porting is complete the
    > Alpha will be decoupled. I guess the migration period will be a
    > maximum of 6 months. Performance of NFS is not critical (yet). I would
    > prefer NFS over 'backup/zip/ftp' but I would more preferably choose
    > DECdfs but again, DECnet is not an option (yet...). Is it possible to
    > run DECdfs over TCP/IP only ?


    No, DECdfs will not run over TCP/IP. But since you are now beginning
    to mention DECdfs, and you seem to be hinting that you might have
    the option of DECnet (sometime...), and I'm (perhaps wrongly)
    assuming that a pure DECnet solution is excluded because of some
    "network decree": are you aware that you can run DECnet Phase V
    over TCP/IP?

  5. Re: NFS - OpenVMS to OpenVMS

    In article <48a443ff$0$90265$14726298@news.sunsite.dk>, "R.A.Omond" writes:
    >Ingi wrote:
    >
    >>[...snip...]

    >
    >No, DECdfs will not run over TCP/IP, but since you are now beginning
    >to mention DECdfs, and you seem to be alluding that you might have
    >the option of DECnet (sometime...), and I'm (perhaps wrongly)
    >assuming that a pure DECnet solution is excluded because of some
    >"network decree", are you aware that you can run DECnet Phase V
    >over TCP/IP ?


    You beat me to it. That would be the best solution short of clustering
    the 2 machines and serving the drive(s).

    --
    VAXman- A Bored Certified VMS Kernel Mode Hacker VAXman(at)TMESIS(dot)COM

    .... pejorative statements of opinion are entitled to constitutional protection
    no matter how extreme, vituperous, or vigorously expressed they may be. (NJSC)

    Copr. 2008 Brian Schenkenberger. Publication of _this_ usenet article outside
    of usenet _must_ include its contents in its entirety including this copyright
    notice, disclaimer and quotations.

  6. Re: NFS - OpenVMS to OpenVMS

    On Aug 14, 5:14 pm, VAXman- @SendSpamHere.ORG wrote:
    > [...snip...]


    Thank you all for your input.

    Running DECdfs is not an option. We don't have DECnet (Phase
    whatever) in the production env., so we won't have any DECnet
    (Phase...) anywhere here either.

    But we're going to create a cluster with TCP/IP only; that will give
    us mountable disks between the nodes over the network. We'll also be
    using 'backup/zip/ftp', so hello DCL, here I come. Hopefully I will
    never ever:
    - forget to include a file in the backup
    - lose any changes by overwriting locally changed files.

    :-)

    I'd appreciate any input/answers/suggestions on questions 3) and 4)
    in my original post (sorry for its length).

    Regards
    - Ingi

  7. Re: NFS - OpenVMS to OpenVMS

    Ingi wrote:

    > I only have proxies setup on the Alpha, it is sometimes mentioned in
    > the documentation that proxies should be set up on the NFS-client side
    > as well, but I haven't figured out why that should be necessary.



    NFS is based on authentication of a group ID and user ID which are
    "foreign" to VMS, but native to Unix.

    You need to have the client map a VMS username to a GID and UID so that
    the client will be sending requests to the server with some GID/UID
    combination, and the server can then translate that GID/UID combo back
    into a VMS username based on its proxy database.

    When the client does not have a proxy, it uses a default UID/GID (can't
    remember the values right now).
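
    As a concrete sketch (the exact qualifiers may differ a little
    between TCP/IP Services versions), the two halves of that mapping
    would look something like this, with the client-side entry marked
    OUTGOING and the server-side entry marked INCOMING:

```
$ TCPIP
TCPIP> ! On the Itanium (NFS client): send requests for VMS user USER
TCPIP> ! to the Alpha as uid 40 / gid 2
TCPIP> ADD PROXY USER /NFS=OUTGOING /UID=40 /GID=2 /HOST=alpha.somedomain
$ TCPIP
TCPIP> ! On the Alpha (NFS server): map uid 40 / gid 2 arriving from
TCPIP> ! the Itanium back to VMS user USER
TCPIP> ADD PROXY USER /NFS=INCOMING /UID=40 /GID=2 /HOST=ia64.somedomain
```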

  8. Re: NFS - OpenVMS to OpenVMS

    Ingi wrote:
    >
    > Hi all
    >
    > I'm working on migrating source code from Alpha to Itanium. The
    > source code is in CMS on a single node Alpha. The new dev-env. Itanium
    > will also be a single node. Both the Alpha and Itanium only have TCP/
    > IP installed, that is NO DECnet and that is not an option either (at
    > least for now).

    [snip]

    Well, rather than mess with NFS, what I recommend - if you have some
    extra disk space - is to use the ZIP utility to create an archive of the
    entire source tree:

    $ ZIP/VMS/KEEP/NODIR/LEVEL=8 SOURCE.ZIP [...]*.*;*

    ..., FTP the .ZIP file over to the target machine, then UNZIP it on
    the target:

    $ UNZIP/VERS SOURCE.ZIP

    The UN*X equivalent would be a GZIPped tarball.

    Dunno if that can help you, but it may be something to consider.
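
    The FTP step in the middle would look roughly like this (command
    names vary a little between FTP clients; the important part is
    switching to binary/image mode so the .ZIP isn't record-converted
    in transit):

```
$ FTP ia64.somedomain
FTP> BINARY
FTP> PUT SOURCE.ZIP
FTP> EXIT
```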

    D.J.D.

  9. Re: NFS - OpenVMS to OpenVMS

    In article <7be0035c-5625-4373-9c2b-f21f55804830@w7g2000hsa.googlegroups.com>, Ingi writes:
    >
    > But we're going to create a cluster w. TCP/IP only, that will give
    > us mountable disks between the nodes over the network. We'll also be
    > using 'backup/zip/ftp' so hello DCL here I come, hopefully I will
    > never ever:


    If you're connecting two systems with NFS, fine. But beware of
    using the word "cluster" around here when that's all you've done.

    And soon, HP tells us, you'll be able to VMScluster over IP, but
    if you're not allowed to DECnet over IP, you may not be allowed to
    VMScluster over IP.


  10. Re: NFS - OpenVMS to OpenVMS

    On Aug 14, 1:31 pm, Ingi wrote:
    > [...snip...]


    Have you (and the other folks replying so far) looked at and
    dismissed a "solution" based on host-based InfoServer technologies?
    Or is it something you're not aware of? I don't know much about the
    host-based implementations myself, except to say that on the wire it
    is neither DECnet nor IP; it is just a non-routable LAN-based
    protocol used to serve (and access) block devices (cf. NFS, DFS,
    etc., which serve *files*). The InfoServer "local area disk"
    protocol is not closely related to the clustering protocols. Once
    you've got a block-level protocol of this nature, your OS can layer
    whatever file system (and thus whatever security/authentication/etc.)
    over it as is convenient; in your case the usual VMS stuff would
    seem appropriate. I don't know what resources are available (docs,
    howtos, etc.) either, but if you're still stuck for ideas you could
    start at http://64.223.189.234/node/285 and see where it leads.

    Apologies if this is an unhelpful idea.

  11. Re: NFS - OpenVMS to OpenVMS

    If the move from Alpha to that IA64 thing is an "event", then you
    can use NFS to copy the files over, then verify they were copied
    properly and not truncated (be careful with text files).

    But if the two boxes are to coexist for a long time, clustering
    would be THE solution, because you could then very safely maintain a
    single shared directory structure that is native to VMS, would
    support any/all VMS file organisations without worries, and, more
    importantly, the shared locking means that you can work on both the
    Alpha and that IA64 thing at the same time.
