Thread: node and port alloclass, cannot add a node to the cluster

  1. node and port alloclass, cannot add a node to the cluster

    I've a 2-node cluster and I'm having trouble adding a 3rd node.
    I've been reading the Cluster Systems manual for
    ages but some points are still a bit obscure.

    My cluster:
    +-----------------------------------------------+----------------+
    |                    SYSTEMS                    |    MEMBERS     |
    +--------+---------------------------+----------+-------+--------+
    | NODE   | HW_TYPE                   | SOFTWARE | VOTES | STATUS |
    +--------+---------------------------+----------+-------+--------+
    | OKAPI  | AlphaServer DS10L 617 MHz | VMS V8.3 | 1     | MEMBER |
    | DONKEY | HP rx2620 (1.60GHz/3.0MB) | VMS V8.3 | 1     | MEMBER |
    +--------+---------------------------+----------+-------+--------+

    and I want to add a 3rd node, also ds10l.

    All nodes boot from MSA1000, OKAPI from $1$DGA1: and DONKEY from $1$DGA2:

    I set up the MSA LUNs via the CLI, so the above disk names are as they
    appear in VMS.

    I understand that number "1" in the above disk names is a port
    allocation class. Is that correct?

    I assigned both nodes a node allocation class, also 1.
    Do I need to do this if a port allocation class is already in use?

    I run DECnet-Plus on both nodes.

    Now I want to add a second alpha node (LLAMA) to the cluster, also to boot
    from $1$DGA1. I run CLUSTER_CONFIG.COM from OKAPI, and specify that the
    page and swap files for the new cluster member will be on its local disk.
    At which point the program tells me to boot up the new cluster member,
    and shows the message:

    Waiting for LLAMA to boot...

    Following are the screen outputs from 3 nodes:

    on LLAMA during booting
    *******************

    %CNXMAN, Sending VMScluster membership request to system DONKEY
    %CNXMAN, Now a VMScluster member -- system OKAPI
    %STDRV-I-STARTUP, OpenVMS startup begun at 7-SEP-2007 15:49:09.14
    %CNXMAN, Lost connection to system DONKEY
    %PEA0, Port has Closed Virtual Circuit - REMOTE NODE OKAPI

    %CNXMAN, Quorum lost, blocking activity

    %CNXMAN, Timed-out lost connection to system DONKEY
    %CNXMAN, Proposing reconfiguration of the VMScluster
    %CNXMAN, Removed from VMScluster system DONKEY
    %CNXMAN, Completing VMScluster state transition
    %PEA0, Port has Closed Virtual Circuit - REMOTE NODE DONKEY


    At this point DONKEY crashes:

    on DONKEY
    ***************

    $

    **** OpenVMS I64 Operating System V8.3 - BUGCHECK ****

    ** Bugcheck code = 000005DC: CLUEXIT, Node voluntarily exiting VMScluster
    ** Crash CPU: 00000000 Primary CPU: 00000000 Node Name: DONKEY
    ** Supported CPU count: 00000002
    ** Active CPUs: 00000000.00000003
    ** Current Process: NULL
    ** Current PSB ID: 00000001
    ** Image Name:

    After rebooting, DONKEY cannot form a cluster with OKAPI.

    on DONKEY after rebooting
    *******************************

    %CNXMAN, Lost connection to system OKAPI
    %PKB0, Copyright (c) 2001 LSI Logic, PKM V1.1.01
    %PKB0, SCSI Chip is LSI53C1030, Operating mode is LVD Ultra320 SCSI
    %PKB0, LSI53C1030 firmware version is 1.3.35.65
    %MSCPLOAD-I-CONFIGSCAN, enabled automatic disk serving
    %CNXMAN, Timed-out lost connection to system OKAPI
    %PEA0, Virtual Circuit Timeout - REMOTE NODE OKAPI

    %PEA0, Inappropriate SCA Control Message - FLAGS/OPC/STATUS/PORT 00/22/00/FE


    while LLAMA has discovered DONKEY but still cannot connect

    on LLAMA after rebooting DONKEY
    *************************************

    %CNXMAN, Discovered system DONKEY
    %PEA0, Virtual Circuit Timeout - REMOTE NODE DONKEY

    %CNXMAN, Discovered system DONKEY
    %PEA0, Virtual Circuit Timeout - REMOTE NODE DONKEY

    %PEA0, Inappropriate SCA Control Message - FLAGS/OPC/STATUS/PORT 00/22/00/FE

    %PEA0, Virtual Circuit Timeout - REMOTE NODE DONKEY

    %PEA0, Virtual Circuit Timeout - REMOTE NODE DONKEY

    %PEA0, Virtual Circuit Timeout - REMOTE NODE DONKEY


    on OKAPI all this time
    ***********************************

    Waiting for LLAMA to boot...
    Waiting for LLAMA to boot...


    If I now shut down LLAMA, the old 2-node DONKEY-OKAPI cluster is formed

    on DONKEY after issuing RMC>reset on LLAMA
    *****************************************

    %PEA0, Virtual Circuit Timeout - REMOTE NODE OKAPI

    %CNXMAN, Established connection to system OKAPI
    %CNXMAN, Now a VMScluster member -- system DONKEY


    Two final questions. While running CLUSTER_CONFIG.COM I see these two warnings:

    WARNING: If the node being added is a voting member, EXPECTED_VOTES for
    every cluster member must be adjusted. For complete instructions
    check the section on configuring a cluster in the "OpenVMS Cluster
    Systems" manual.

    CAUTION: If this cluster is running with multiple system disks and
    common system files will be used, please, do not proceed
    unless appropriate logical names are defined for cluster
    common files in SYLOGICALS.COM. For instructions, refer to
    the "OpenVMS Cluster Systems" manual.

    I understand that I need to adjust EXPECTED_VOTES only after the new
    node is added successfully. Is that correct?

    I understand that "multiple system disks" means multiple system
    disks for the same architecture, i.e. 2 alpha system disks, or 3 i64
    system disks. As I only have a single alpha and a single I64 disk I
    presume I don't have "multiple system disks". Is that correct?

    many thanks
    anton

    --
    Anton Shterenlikht
    Room 2.6, Queen's Building
    Mech Eng Dept
    Bristol University
    University Walk, Bristol BS8 1TR, UK
    Tel: +44 (0)117 928 8233
    Fax: +44 (0)117 929 4423

  2. Re: node and port alloclass, cannot add a node to the cluster

    In article <20070907155014.GA46122@mech-aslap33.men.bris.ac.uk>, Anton Shterenlikht writes:
    >
    > %CNXMAN, Lost connection to system DONKEY


    Running through all the messages it really looks like you've got
    a network problem.


    > CAUTION: If this cluster is running with multiple system disks and
    > common system files will be used, please, do not proceed
    > unless appropriate logical names are defined for cluster
    > common files in SYLOGICALS.COM. For instructions, refer to
    > the "OpenVMS Cluster Systems" manual.
    >
    > I understand that I need to adjust EXPECTED_VOTES only after the new
    > node is added successfully. Is that correct?


    I'd set the proper number of VOTES and EXPECTED_VOTES once you
    decide what they are. In your case 1 each and 3, or 1 per boot
    disk and 2.
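
    If it helps, here is a minimal MODPARAMS.DAT sketch for the one-vote-each
    case (the numbers are just the example above, and AUTOGEN has to be run
    afterwards to apply them):

    ! SYS$SYSTEM:MODPARAMS.DAT on each member -- illustrative values only
    VOTES = 1
    EXPECTED_VOTES = 3    ! sum of the votes of all voting members

    $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK

    On a running cluster you can also adjust the quorum on the fly with
    SET CLUSTER/EXPECTED_VOTES.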

    > I understand that "multiple system disks" means multiple system
    > disks for the same architecture, i.e. 2 alpha system disks, or 3 i64
    > system disks. As I only have a single alpha and a single I64 disk I
    > presume I don't have "multiple system disks". Is that correct?


    You do have multiple system disks as far as the logical names referred
    to in the above message are concerned. For example, there should
    only be one SYSUAF and all three nodes should have a logical name
    pointing to it, unless it just happens to be findable in sys$system:
    on that node. So if you put it in sys$common:[sysexe] on the disk
    the Alphas share then you only really need to define that on the
    IA64.
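
    For instance, something along these lines on the I64 node (the directory
    is just an assumption -- point it at wherever the shared file really
    lives):

    $ ! on the IA64, point SYSUAF at the copy on the Alpha-shared disk
    $ DEFINE/SYSTEM/EXECUTIVE SYSUAF $1$DGA1:[VMS$COMMON.SYSEXE]SYSUAF.DAT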



  3. Re: node and port alloclass, cannot add a node to the cluster

    Bob Koehler wrote:
    > In article <20070907155014.GA46122@mech-aslap33.men.bris.ac.uk>, Anton Shterenlikht writes:
    >
    >>%CNXMAN, Lost connection to system DONKEY

    >
    >
    > Running through all the messages it really looks like you've got
    > a network problem.
    >
    >
    >
    >> CAUTION: If this cluster is running with multiple system disks and
    >> common system files will be used, please, do not proceed
    >> unless appropriate logical names are defined for cluster
    >> common files in SYLOGICALS.COM. For instructions, refer to
    >> the "OpenVMS Cluster Systems" manual.
    >>
    >>I understand that I need to adjust EXPECTED_VOTES only after the new
    >>node is added successfully. Is that correct?

    >
    >
    > I'd set the proper number of VOTES and EXPECTED_VOTES once you
    > decide what they are. In your case 1 each and 3, or 1 per boot
    > disk and 2.
    >
    >
    >>I understand that "multiple system disks" means multiple system
    >>disks for the same architecture, i.e. 2 alpha system disks, or 3 i64
    >>system disks. As I only have a single alpha and a single I64 disk I
    >>presume I don't have "multiple system disks". Is that correct?

    >
    >
    > You do have multiple system disks as far as the logical names referred
    > to in the above message are concerned. For example, there should
    > only be one SYSUAF and all three nodes should have a logical name
    > pointing to it, unless it just happens to be findable in sys$system:
    > on that node. So if you put it in sys$common:[sysexe] on the disk
    > the Alphas share then you only really need to define that on the
    > IA64.
    >


    While what Bob says is correct, I'm pretty sure it's not your
    problem. You'll end up with multiple SYSUAF's, multiple queue databases,
    etc., but that's actually legitimate in some cases (non-homogeneous
    cluster.) And it won't keep the cluster from forming.

    You mention a pagefile on the local disk on the new DS10L. Do you
    have local disks on more than one node? If so, you want the allocation
    classes to be *different* on each node, unless the "local" disks are
    actually on a shared SCSI bus. If you have 2 $1$dka0:'s because
    both alphas (or all 3 systems) have a local SCSI disk named DKA0:,
    you will have problems, and quite possibly the cluster won't form
    or one of the nodes will get booted out as soon as it tries to
    access one of the colliding disks.

    There are three reasons to have allocation classes: 1) shadowing
    requires them, 2) to prevent colliding disk names so you can serve
    them to the other systems, and 3) to make sure that disks on a shared
    SCSI bus have the same name when viewed from any of the systems.
    If you have allocation class set to 0, then the local disks on
    node HORSE will be called HORSE$DKcnnn:, and the local disks on
    node ZEBRA will be ZEBRA$DKcnnn:, which will work fine unless you
    want to shadow, or connect two of the local SCSI buses together.

    If instead you assign allocation classes to each system, you
    want them to be different. I.E. HORSE has 255, ZEBRA has 254,
    etc. Then HORSE's local disk names are $255$DKcnnn: and
    ZEBRA's disks are $254$DKcnnn:, no collisions in the name
    space, and now you can shadow. However, shared SCSI buses
    will not work.

    To use shared SCSI buses, all the disks on a given bus must
    have the same name when viewed from any host on that bus.
    So you can either give all the systems the SAME allocation
    class (but that again breaks non-shared disks, unless they all
    happen to have different controller letters and/or unit
    numbers, maybe), or the best solution is to give each bus
    a port allocation class that is the same on all systems.
    In this case, the bus can be connected to different
    controllers on each system, i.e. the "A" bus on one and
    the "B" bus on another, since with port allocation classes,
    VMS uses a fake device name that is always controller "A".
    For example, if the DKBnnn: bus on one system is connected
    to the DKCnnn: bus on the other, then if the PKB: controller
    on the first system and the PKC: controller on the second
    are both assigned port allocation class 3, the disks will
    appear as $3$DKAnnn: on both systems. This makes configuring
    much more flexible.

    SAN disks, such as MSA1000's, always show up as $1$DGAnnnn:,
    (always allocation class 1, always device name DG, always
    controller "A".) They act kind of like a shared SCSI bus
    with port allocation class 1 on all systems. So you don't
    have to worry about name collisions there, but you do have
    to make sure all the unit id's are different if you have
    more than 1 SAN array.

    (SAN tapes are always allocation class 2, i.e. $2$MGAnnnn:.)

    What this all boils down to is you need a set of rules for
    preventing name collisions for different devices and ensuring
    names are the same for shared devices.

    Many people do this by assigning each host a different
    allocation class, counting down from 255. Then they
    assign each shared SCSI bus a port allocation class, counting
    up from 3. (Skipping 1 and 2 used by the MSA1000 or other
    SAN arrays.)
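
    As a sketch of that convention for two hypothetical nodes (the names,
    controller letters and class numbers are only examples), you would put
    this in SYS$SYSTEM:MODPARAMS.DAT on HORSE (254 on ZEBRA, and so on) and
    then run AUTOGEN:

    ALLOCLASS = 255

    and this in SYS$SYSTEM:SYS$DEVICES.DAT on HORSE, for the shared bus on
    its PKB port:

    [Port HORSE$PKB]
    allocation class = 3

    The matching entry on the other node names its own controller (say
    [Port ZEBRA$PKC]) but uses the same class, so the shared disks come up
    as $3$DKAnnn: from every member.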

    Or you could Google back to a fairly recent post where someone
    else explained all this, probably much better and including some
    edge cases I've forgotten about....

    HTH

    --
    John Santos
    Evans Griffiths & Hart, Inc.
    781-861-0670 ext 539

  4. Re: node and port alloclass, cannot add a node to the cluster

    On Sat, Sep 08, 2007 at 07:22:13AM +0000, John Santos wrote:
    > Bob Koehler wrote:
    > >
    > > You do have multiple system disks as far as the logical names referred
    > > to in the above message are concerned. For example, there should
    > > only be one SYSUAF and all three nodes should have a logical name
    > > pointing to it, unless it just happens to be findable in sys$system:
    > > on that node. So if you put it in sys$common:[sysexe] on the disk
    > > the Alphas share then you only really need to define that on the
    > > IA64.
    > >

    >
    > While what Bob says is correct, I'm pretty sure it's not your
    > problem. You'll end up with multiple SYSUAF's, multiple queue databases,
    > etc., but that's actually legitimate in some cases (non-homogeneous
    > cluster.) And it won't keep the cluster from forming.


    Bob, John, many thanks.

    Perhaps I don't get the very basics, so apologies if my questions are
    obvious or stupid.

    For instance, I've a 2-node Alpha-I64 cluster even though I never
    ran CLUSTER_CONFIG.COM or CLUSTER_CONFIG_LAN.COM. Somehow the cluster
    was formed, at least according to the startup messages and to the
    outputs of SHOW CLUSTER or SHOW DEVICES commands. Neither did I
    define any clusterwide logical names in SYLOGICALS.COM or elsewhere.
    How could the cluster be formed? Is it really formed?

    I now understand that actually one can only add a node of the same
    architecture as that of a node on which CLUSTER_CONFIG.COM is run.
    So is CLUSTER_CONFIG.COM irrelevant to a 2-node Alpha-I64 cluster?

    The Cluster Systems manual recommends all clusterwide logical names
    except LMF$LICENSE, NET$PROXY, and VMS$OBJECTS to be defined
    in SYSTARTUP_VMS.COM. However, on my alpha and I64 they are all
    defined in SYLOGICALS.TEMPLATE. Is there not a contradiction?
    Or is that a minor point? The
    manual explains the difference: "OpenVMS will ensure that the clusterwide
    database has been initialized before SYSTARTUP_VMS.COM is executed."

    I understood from Bob's reply that if I put the core system files
    like e.g. SYSUAF.DAT in SYS$COMMON:[SYSEXE] on one node, then I only need
    to define the clusterwide logical names on the other node. Is that
    correct? Does it matter which node?

    Simply having clusterwide logical names does not prevent having
    system files with different data on each node, or is that not a problem?

    Can I think of VMS logical names as analogous to UNIX links? In other
    words, if I have a system file on one node and define a clusterwide
    logical name for this file, can I delete this file from all other
    nodes in a cluster?

    Why do the definitions in SYLOGICALS.TEMPLATE have /SYSTEM qualifiers:

    $! DEFINE/SYSTEM/EXECUTIVE SYSUAF SYS$SYSTEM:SYSUAF.DAT
    $! DEFINE/SYSTEM/EXECUTIVE SYSUAFALT SYS$SYSTEM:SYSUAFALT.DAT
    $! DEFINE/SYSTEM/EXECUTIVE SYSALF SYS$SYSTEM:SYSALF.DAT

    Wouldn't /CLUSTER_SYSTEM be more appropriate?

    What other manuals besides Cluster Systems and System Manager's (vol 1 and 2)
    can I refer to for clarification?

    thanks a lot
    anton

    --
    Anton Shterenlikht
    Room 2.6, Queen's Building
    Mech Eng Dept
    Bristol University
    University Walk, Bristol BS8 1TR, UK
    Tel: +44 (0)117 928 8233
    Fax: +44 (0)117 929 4423

  5. Re: node and port alloclass, cannot add a node to the cluster

    Anton Shterenlikht wrote:
    > For instance, I've a 2-node Alpha-I64 cluster even though I never
    > ran CLUSTER_CONFIG.COM or CLUSTER_CONFIG_LAN.COM. Somehow the cluster
    > was formed,



    For a node to join an existing cluster, it needs to have the right
    cluster_authorize.dat file in its sys$system: as well as having its own
    SYSGEN parameters set properly to enable the clustering code.

    If the node is a satellite, it needs to be defined in the LANCP database
    so its MOP requests can be answered, and it needs to have its own system
    root [SYSx.]. It is possible to set those up manually, but it is not
    recommended.
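
    For illustration only, the LANCP side of a satellite definition looks
    roughly like this (the node name, MAC address, root and boot type are
    made-up values, not a recipe for this particular cluster):

    $ MCR LANCP
    LANCP> DEFINE NODE SATNODE /ADDRESS=08-00-2B-AA-BB-CC /ROOT=$1$DGA1:<SYS10.> /BOOT_TYPE=ALPHA_SATELLITE
    LANCP> EXIT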

    > I now understand that actually one can only add a node of the same
    > architecture as that of a node on which CLUSTER_CONFIG.COM is run.


    If you add a standalone node (with its own system disk), you can run
    CLUSTER_CONFIG on that node. You'll be prompted for the cluster
    information (group and password) and that procedure will create a local
    CLUSTER_AUTHORIZE.DAT and set the proper SYSGEN parameters.

    When you add a satellite node, you run CLUSTER_CONFIG on the boot node
    to define the satellite's parameters (Ethernet address, root name, node
    name, etc.). It then creates the [SYSx...] structure along with the alias
    to VMS$COMMON.DIR and populates it with basic SYSGEN/MODPARAMS data.


    Logical names are then defined by the system manager based on how he
    wants his cluster to operate (shared queues, shared SYSUAF, etc.).


    >However, on my alpha and I64 they are all
    > defined in SYLOGICALS.TEMPLATE. Is there not a contradiction?


    SYLOGICALS.TEMPLATE does not get executed. You are not supposed to
    modify the .TEMPLATE file. You are supposed to use it as "inspiration"
    to populate SYLOGICALS.COM which is the one that gets executed early in
    the boot stages.


    > I understood from Bob's reply that if I put the core system files
    > like e.g. SYSUAF.DAT in SYS$COMMON:[SYSEXE] on one node, then I only need
    > to define the clusterwide logical names on the other node. Is that
    > correct? Does it matter which node?


    It depends on your environment. If SYSUAF.DAT resides on a disk that is
    accessible only from NODE-A, consider the implications of NODE-B
    booting first and NODE-A remaining offline. If NODE-B defines SYSUAF to
    point to a non-existing device (since NODE-A hasn't booted), then anyone
    can access NODE-B without a password.

    So only the node(s) that have direct access to the files (and that can
    serve them to nodes not having direct access) should define such logicals,
    and when other nodes join the cluster, the cluster-wide table gets
    copied and they automatically get the definitions for SYSUAF etc.

    When a node boots and the central SYSUAF.DAT is not available, it should
    default to defining no logical name and having a minimally populated
    SYSUAF.DAT file in its SYS$SYSTEM to provide access to at least the
    system manager until the main node boots and provides access to the
    central SYSUAF as well as defining the clusterwide logical, after which
    those other systems automatically start to access the central one instead
    of the minimalist local one.
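
    In SYLOGICALS.COM that idea might look something like this (the device
    and directory are placeholders, not a recommendation for this cluster):

    $ ! define the shared SYSUAF only when the common disk actually exists
    $ ! and is mounted; otherwise the node falls back to its own local
    $ ! SYS$SYSTEM:SYSUAF.DAT
    $ IF F$GETDVI("DSA1:","EXISTS")
    $ THEN
    $     IF F$GETDVI("DSA1:","MNT") THEN DEFINE/SYSTEM/EXECUTIVE SYSUAF DSA1:[VMS$COMMON.SYSEXE]SYSUAF.DAT
    $ ENDIF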

    > Why do the definitions in SYLOGICALS.TEMPLATE have /SYSTEM qualifiers:
    > Wouldn't /CLUSTER_SYSTEM be more appropriate?


    Clusterwide logicals are a relatively recent addition of VMS (7.2 if I
    remember correctly). There is not yet a simple /CLUSTER_SYSTEM
    qualifier. Also, note that it is possible to create cluster-wide
    logical name tables that are not "SYSTEM" (aka: think about group
    logical name tables that propagate across nodes.).

  6. Re: node and port alloclass, cannot add a node to the cluster

    JF Mezei writes:
    > Anton Shterenlikht wrote:


    >> Why do the definitions in SYLOGICALS.TEMPLATE have /SYSTEM qualifiers:
    >> Wouldn't /CLUSTER_SYSTEM be more appropriate?


    > Clusterwide logicals are a relatively recent addition of VMS (7.2 if I
    > remember correctly). There is not yet a simple /CLUSTER_SYSTEM
    > qualifier. Also, note that it is possible to create cluster-wide
    > logical name tables that are not "SYSTEM" (aka: think about group
    > logical name tables that propagate across nodes.).


    The lesson to be taken from the above post is not to trust JF to deliver
    accurate and timely technical information

    $ sho sys/noproc
    OpenVMS V8.3 on node CUEBID 18-SEP-2007 12:48:34.55 Uptime 33 12:34:33

    $ help define /clus

    DEFINE

    /CLUSTER_SYSTEM

    You must be signed in to the SYSTEM account or have SYSNAM
    (system logical name) or SYSPRV (system) privilege to use this
    qualifier.

    Defines a clusterwide logical name in the LNM$SYSCLUSTER table.

    --

    Rob Brooks MSL -- Nashua brooks!cuebid.zko.hp.com

  7. Re: node and port alloclass, cannot add a node to the cluster

    Rob Brooks wrote:
    > The lesson to be taken from the above post is not to trust JF to deliver
    > accurate and timely technical information


    Thank you for your vote of confidence. Very appreciated. Please
    disregard my previous post completely since none of it was usable.


    > $ sho sys/noproc
    > OpenVMS V8.3 on node CUEBID 18-SEP-2007 12:48:34.55 Uptime 33 12:34:33
    >
    > $ help define /clus


    Well, so they finally snuck this one in. But it is still unusable in
    mixed-architecture clusters since procedures using this new qualifier
    will bomb when running on VAX, since HP failed to honour its "plan of
    record" to deliver an 8.* version of VAX-VMS.

    If you have no VAXen left in your shop, then you can use it to your
    heart's content along with all the new gadgets that came with 8.3, but
    if you still have one or more VAXes, you need to be careful about using
    features that are not available on VAX in procedures that may be running
    on a VAX. (Consider SYSMAN with SET ENV/CLUSTER to run procedures: a
    procedure residing on an Alpha might be tasked to run on a VAX without
    the system manager realising that the procedure he will invoke uses
    Alpha-only semantics.)

  8. Re: node and port alloclass, cannot add a node to the cluster

    Anton Shterenlikht writes:

    >For instance, I've a 2-node Alpha-I64 cluster even though I never
    >ran CLUSTER_CONFIG.COM or CLUSTER_CONFIG_LAN.COM. Somehow the cluster
    >was formed, at least according to the startup messages and to the
    >outputs of SHOW CLUSTER or SHOW DEVICES commands. Neither did I
    >define any clusterwide logical names in SYLOGICALS.COM or elsewhere.
    >How could the cluster be formed? Is it really formed?


    If the SYSGEN parameter VAXCLUSTER is set right, the system will come up
    as a cluster. Whether it's a _usable_ cluster depends on other SYSGEN
    parameters, the existence and contents of CLUSTER_AUTHORIZE.DAT and other
    things. Often a misconfigured "cluster" will work just fine as a "cluster"
    of one node, but you won't get it working right when trying to add another
    node.
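
    For reference, the handful of MODPARAMS.DAT entries involved look roughly
    like this (the values are illustrative only; AUTOGEN applies them):

    VAXCLUSTER = 2          ! 2 = always form/join a cluster, 0 = never
    NISCS_LOAD_PEA0 = 1     ! load PEDRIVER so the cluster can run over the LAN
    VOTES = 1
    EXPECTED_VOTES = 3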

    >I now understand that actually one can only add a node of the same
    >architecture as that of a node on which CLUSTER_CONFIG.COM is run.
    >So is CLUSTER_CONFIG.COM irrelevant to a 2-node Alpha-I64 cluster?


    Run CLUSTER_CONFIG on each node, making sure to specify the same or
    compatible settings (esp. the group number and password in
    CLUSTER_AUTHORIZE.DAT). If you're running as a cluster but unsure how you
    got there, you may want to start over by cleaning out MODPARAMS.DAT,
    rebooting with VAXCLUSTER=0, and running CLUSTER_CONFIG to form a new
    cluster. You'll have to plan things such as how to make the disk with
    SYSUAF etc. available to all members even when not all of them are up.

    >The Cluster Systems manual recommends all clusterwide logical names
    >except LMF$LICENSE, NET$PROXY, and VMS$OBJECTS to be defined
    >in SYSTARTUP_VMS.COM. However, on my alpha and I64 they are all
    >defined in SYLOGICALS.TEMPLATE. Is there not a contradiction?


    You are supposed to make your own SYLOGICALS.COM, using the .TEMPLATE as
    a starting point if you want.

    >I understood from Bob's reply that if I put the core system files
    >like e.g. SYSUAF.DAT in SYS$COMMON:[SYSEXE] on one node, then I only need
    >to define the clusterwide logical names on the other node. Is that
    >correct? Does it matter which node?


    If you want SYSUAF to reside somewhere other than what equates to
    SYS$COMMON:[SYSEXE]SYSUAF.DAT, you'll have to define the logical,
    cluster or no cluster.

    >Simply having clusterwide logical names does not prevent from having
    >system files with different data on each node, or is that not a problem?


    You have to make all the logicals consistent, all pointing to the same place!

    >Why do the definitions in SYLOGICALS.TEMPLATE have /SYSTEM qualifiers:


    >$! DEFINE/SYSTEM/EXECUTIVE SYSUAF SYS$SYSTEM:SYSUAF.DAT
    >$! DEFINE/SYSTEM/EXECUTIVE SYSUAFALT SYS$SYSTEM:SYSUAFALT.DAT
    >$! DEFINE/SYSTEM/EXECUTIVE SYSALF SYS$SYSTEM:SYSALF.DAT


    >Wouldn't /CLUSTER_SYSTEM be more appropriate?


    /CLUSTER_SYSTEM is quite new. /SYSTEM is nodewide, and must be done
    on each node. Again, each node's definition must be consistent with the
    others' (and reality).

    For example, perhaps you want all your clusterwide stuff like SYSUAF to
    reside on shadowset DSA1:. You'd want to do something like:
    $ DEFINE/SYSTEM/EXECUTIVE SYSUAF DSA1:[SYSTEM]SYSUAF.DAT
    on EACH node. Same for other things. Of course, you'll have to
    $ MOUNT/SYSTEM DSA1: first, as well.

    -Mike

  9. Re: node and port alloclass, cannot add a node to the cluster

    > Also, note that it is possible to create cluster-wide
    > logical name tables that are not "SYSTEM" (aka: think about group
    > logical name tables that propagate across nodes.).


    You can do it. I have done it. You define a table whose parent table
    is the cluster table. Details are left as an exercise, but if anyone is
    really curious ask and I'll post the code.
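
    A minimal sketch, with an invented table name (SYSPRV or SYSNAM is
    needed):

    $ ! parenting a table to the clusterwide directory makes it clusterwide
    $ CREATE/NAME_TABLE/PARENT_TABLE=LNM$CLUSTER_TABLE MY_CLUSTER_TABLE
    $ DEFINE/TABLE=MY_CLUSTER_TABLE MY_APP_ROOT $1$DGA1:[MYAPP]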


  10. Re: node and port alloclass, cannot add a node to the cluster

    Anton Shterenlikht wrote:
    > [snip]
    > All nodes boot from MSA1000, OKAPI from $1$DGA1: and DONKEY from $1$DGA2:
    >
    > I set up the MSA LUNs via CLI, so the above disk names are as they
    > appear in VMS.
    >
    > I understand that number "1" in the above disk names is a port
    > allocation class. Is that correct?


    No. Fibre-channel disks always get ALLOCLASS 1. Fibre channel tapes always get
    ALLOCLASS 2.

    Port allocation classes are assigned in SYS$SYSTEM:SYS$DEVICES.DAT like so:

    [Port MYNODE$PKA]
    allocation class = 10

    ...., for example, will cause all the direct-attached SCSI drives (including HSZ
    units) to be $10$DKAu:

    > I assigned both nodes a node allocation class, also 1.
    > Do I need to do this if the port allocation is already in use?


    If you intend to use HBVS, it may be a requirement that the node's ALLOCLASS is
    non-zero. I'd use something other than 1 or 2, myself.

    > I run DECnet-Plus on both nodes.


    YYYYEEECCCHHHH!

    --
    David J Dachtera
    dba DJE Systems
    http://www.djesys.com/

    Unofficial OpenVMS Marketing Home Page
    http://www.djesys.com/vms/market/

    Unofficial Affordable OpenVMS Home Page:
    http://www.djesys.com/vms/soho/

    Unofficial OpenVMS-IA32 Home Page:
    http://www.djesys.com/vms/ia32/

    Unofficial OpenVMS Hobbyist Support Page:
    http://www.djesys.com/vms/support/

  11. Re: node and port alloclass, cannot add a node to the cluster

    HBVS does not require ALLOCLASS to be non-zero. It requires the
    participating disk devices to have a non-zero allocation class. In an
    environment with only FC attached disks, setting ALLOCLASS to zero on
    all systems has the advantage that, for instance, local attached CD or
    tape drives get a name like $DQA0:, which is more descriptive
    than $n$DQA0:

    Bart Zorn

    On Sep 20, 2:43 am, David J Dachtera
    wrote:
    > Anton Shterenlikht wrote:
    > > [snip]
    > > All nodes boot from MSA1000, OKAPI from $1$DGA1: and DONKEY from $1$DGA2:

    >
    > > I set up the MSA LUNs via CLI, so the above disk names are as they
    > > appear in VMS.

    >
    > > I understand that number "1" in the above disk names is a port
    > > allocation class. Is that correct?

    >
    > No. Fibre-channel disks always get ALLOCLASS 1. Fibre channel tapes always get
    > ALLOCLASS 2.
    >
    > Port allocation classes are assigned in SYS$SYSTEM:SYS$DEVICES.DAT like so:
    >
    > [Port MYNODE$PKA]
    > allocation class = 10
    >
    > ..., for example, will cause all the direct-attached SCSI drives (including HSZ
    > units) to be $10$DKAu:
    >
    > > I assigned both nodes a node allocation class, also 1.
    > > Do I need to do this if the port allocation is already in use?

    >
    > If you intend to use HBVS, it may be a requirement that the node's ALLOCLASS is
    > non-zero. I'd use something other than 1 or 2, myself.
    >
    > > I run DECnet-Plus on both nodes.

    >
    > YYYYEEECCCHHHH!
    >




  12. change password on node 1 from node 2

    My password expired on node 1 in a vms cluster, so I cannot connect
    with ssh (old problem). I can connect to node 2 using system account.
    Can I change a system or an ordinary user password on node 1 from node 2?

    I tried SYSMAN, but cannot see how to do it.
    I'm also thinking about copying SYSUAF.DAT from node 1 to node 2,
    running AUTHORIZE on it to change the password, and then sending it
    back to node 1. Is that a good idea?

    I can, of course, do it from the console, if everything else fails.

    many thanks
    anton

    --
    Anton Shterenlikht
    Room 2.6, Queen's Building
    Mech Eng Dept
    Bristol University
    University Walk, Bristol BS8 1TR, UK
    Tel: +44 (0)117 928 8233
    Fax: +44 (0)117 929 4423

  13. SOLVED: Re: change password on node 1 from node 2

    On Fri, Apr 11, 2008 at 01:33:05PM +0100, Anton Shterenlikht wrote:
    > My password expired on node 1 in a vms cluster, so I cannot connect
    > with ssh (old problem). I can connect to node 2 using system account.
    > Can I change a system or an ordinary user password on node 1 from node 2?
    >
    > I tried SYSMAN, but cannot see how to do it.
    > I'm also thinking about copying SYSUAF.DAT from node 1 to node 2,
    > running AUTHORIZE on it to change the password, and then sending it
    > back to node 1. Is that a good idea?
    >
    > I can, of course, do it from the console, if everything else fails.


    ssh from node 1 to node 2 worked.

    --
    Anton Shterenlikht
    Room 2.6, Queen's Building
    Mech Eng Dept
    Bristol University
    University Walk, Bristol BS8 1TR, UK
    Tel: +44 (0)117 928 8233
    Fax: +44 (0)117 929 4423

  14. Re: change password on node 1 from node 2

    On Fri, Apr 11, 2008 at 8:33 AM, Anton Shterenlikht wrote:
    > My password expired on node 1 in a vms cluster, so I cannot connect
    > with ssh (old problem). I can connect to node 2 using system account.
    > Can I change a system or an ordinary user password on node 1 from node 2?


    If your cluster is running with a common sysuaf, then you can change
    the password from any node in the cluster and it will be changed on
    all nodes.

    >
    > I tried SYSMAN, but cannot see how to do it.
    > I'm also thinking about copying SYSUAF.DAT from node 1 to node 2,
    > running AUTHORIZE on it to change the password, and then sending it
    > back to node 1. Is that a good idea?


    No, that's a bad idea. If someone else changes their password or
    anything else in node 1's sysuaf between the time you copy it to node
    2 and the time you copy it back to node 1, that change is lost.

    If you want to use sysman, here's one way to do it if you have a
    logical name pointing at the sysuaf.dat:

    $ pipe write sys$output "do mc authorize mod <username>/pass=<password>/nopwdexp" | sysman set env/node=node1

    Replace the words within the "<>" with the appropriate strings.

    Ken

  15. Re: change password on node 1 from node 2

    hi hanton

    $ assign node2::sys$system:sysuaf.dat sysuaf
    $ mc authorize mod .....

    best regards

    "Anton Shterenlikht" a écrit dans le message de news:
    20080411123305.GA1876@mech-aslap33.men.bris.ac.uk...
    > My password expired on node 1 in a vms cluster, so I cannot connect
    > with ssh (old problem). I can connect to node 2 using system account.
    > Can I change a system or an ordinary user password on node 1 from node 2?
    >
    > I tried SYSMAN, but cannot see how to do it.
    > I'm also thinking about copying SYSUAF.DAT from node 1 to node 2,
    > running AUTHORIZE on it to change the password, and then sending it
    > back to node 1. Is that a good idea?
    >
    > I can, of course, do it from the console, if everything else fails.
    >
    > many thanks
    > anton
    >
    > --
    > Anton Shterenlikht
    > Room 2.6, Queen's Building
    > Mech Eng Dept
    > Bristol University
    > University Walk, Bristol BS8 1TR, UK
    > Tel: +44 (0)117 928 8233
    > Fax: +44 (0)117 929 4423
    >




  16. Re: change password on node 1 from node 2

    On Fri, Apr 11, 2008 at 03:06:07PM +0200, Raf The Cat wrote:
    > hi hanton
    >
    > $ assign node2::sys$system:sysuaf.dat sysuaf
    > $ mc authorize mod .....


    mc is not a DCL command, is it? I can't find any info on mc.

    thanks
    anton

    --
    Anton Shterenlikht
    Room 2.6, Queen's Building
    Mech Eng Dept
    Bristol University
    University Walk, Bristol BS8 1TR, UK
    Tel: +44 (0)117 928 8233
    Fax: +44 (0)117 929 4423

  17. Re: change password on node 1 from node 2

    On Fri, Apr 11, 2008 at 12:00 PM, Anton Shterenlikht
    wrote:
    > On Fri, Apr 11, 2008 at 03:06:07PM +0200, Raf The Cat wrote:
    > > hi hanton
    > >
    > > $ assign node2::sys$system:sysuaf.dat sysuaf
    > > $ mc authorize mod .....

    >
    > mc is not a DCL command, is it? I can't find any info on mc.


    "mc" is short for "mcr" which is a throwback to the old RSX emulator
    days and still works to invoke programs.

    Doing:

    $ auth :== $authorize
    $ auth s/br *

    and

    $ mc authorize s/br *

    will do the same thing (assuming you've defined the sysuaf logical
    name to point to your sysuaf.dat file or you're running the command
    from the directory where your sysuaf.dat file resides.)

    Ken

  18. Re: change password on node 1 from node 2

    Anton Shterenlikht wrote:
    > On Fri, Apr 11, 2008 at 03:06:07PM +0200, Raf The Cat wrote:
    >> hi hanton
    >>
    >> $ assign node2::sys$system:sysuaf.dat sysuaf
    >> $ mc authorize mod .....

    >
    > mc is not a DCL command, is it? I can't find any info on mc.
    >
    > thanks
    > anton
    >


    Try "MCR". "MC" is just an abbreviation. MCR was an acronym for
    Monitor Console Routine. It came from one or more of the PDP-11
    operating systems. In recent versions of VMS, say 4.0 and later, it's a
    synonym for RUN SYS$SYSTEM:

    --
    Here, there be dragons.

  19. Re: change password on node 1 from node 2

    On Fri, Apr 11, 2008 at 12:53 PM, Richard B. Gilbert
    wrote (in part):

    > Try "MCR". "MC" is just an abbreviation. MCR was an acronym for Monitor
    > Console Routine. It came from one or more of the PDP-11 operating systems.
    > In recent versions of VMS, say 4.0 and later, it's a synonym for RUN
    > SYS$SYSTEM:


    No, it's not a synonym for RUN SYS$SYSTEM. When you use the MCR
    command to invoke a program, you can pass parameters to the program on
    the command line. You can not do that with the RUN command. It is more
    of a shortcut for defining a foreign command.
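
    A quick illustration with AUTHORIZE:

    $ MCR AUTHORIZE SHOW SYSTEM      ! parameters allowed on the command line
    $ RUN SYS$SYSTEM:AUTHORIZE       ! no parameters allowed; you get UAF>
    UAF> SHOW SYSTEM
    UAF> EXIT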

    Ken

  20. Re: change password on node 1 from node 2

    Ken Robinson wrote:
    > On Fri, Apr 11, 2008 at 12:53 PM, Richard B. Gilbert
    > wrote (in part):
    >
    >> Try "MCR". "MC" is just an abbreviation. MCR was an acronym for Monitor
    >> Console Routine. It came from one or more of the PDP-11 operating systems.
    >> In recent versions of VMS, say 4.0 and later, it's a synonym for RUN
    >> SYS$SYSTEM:

    >
    > No, it's not a synonym for RUN SYS$SYSTEM. When you use the MCR
    > command to invoke a program, you can pass parameters to the program on
    > the command line. You can not do that with the RUN command. It is more
    > of a shortcut for defining a foreign command.
    >
    > Ken


    c/synonym/substitute/

    It saves some typing!
