Running circuit on multiple nodes in a cluster - VMS



Thread: Running circuit on multiple nodes in a cluster

  1. Running circuit on multiple nodes in a cluster

    First, I'd like to thank everyone who responded to my
    prior posting, when I was having trouble getting the
    circuitcheck channel working. It turns out that, mostly,
    I just needed the knowledge that it was working elsewhere,
    and then I found a few typing and editing errors that I'd
    made.



    The purpose of this posting is to solicit advice on how
    to configure circuitcheck in a cluster. Right now, I've
    done a "pmdf start circ" on only one node.

    My goal is to allow the cluster to continue to monitor
    email being looped through other systems, independent of
    which nodes in the cluster might happen to be down.

    My first thought was to just run
    $ pmdf startup circuit
    on all nodes, as part of the site-specific pmdf startup.
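    Concretely, what I had in mind was something like the
    following in the site-specific startup (the command file
    name here is made up; substitute whatever your site
    actually uses):

    $! SYS$STARTUP:PMDF_SITE_STARTUP.COM  (hypothetical name)
    $! Start the circuitcheck channel on each node at boot
    $ pmdf startup circuit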

    I tried that, but stopped because it didn't seem to be
    working: "pmdf circ /show" on some nodes indicated that no
    probe messages were being received back.

    Before I try to troubleshoot further, I wanted to make sure
    that I'm going in the right direction.

    What I did produced a "PMDF circuit" process on each node,
    each keeping track of things in its own file:
    pmdf_table:circuitcheck_results_.dat

    Each node was generating messages, all of which were then
    coming back to a single circuitcheck channel on the cluster.


    [Aside: I don't understand the utility of having
    per-node files. It would seem there should
    be just a single file, shared by multiple
    processes, with appropriate locking.]


    Should I have a separate circuitcheck channel for each
    cluster node, with the slave channels running in node-
    specific batch queues?

    To do that, I'd need to have each circuitcheck process using
    a different circuitcheck.cnf. If that's incorporated into
    the compiled configuration, I think that means I'll need to
    run with node-specific pmdf_table directories.
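    If node-specific pmdf_table directories did turn out to be
    necessary, I assume it would come down to a per-node logical
    name definition, something like this (the disk and directory
    names are invented for illustration):

    $! Hypothetical: point PMDF_TABLE at a per-node directory
    $ node = F$GETSYI("NODENAME")
    $ DEFINE /SYSTEM /EXEC PMDF_TABLE disk$pmdf:[pmdf.table_'node']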

    Partly because I've never done that, this seems like it's
    getting way too complicated for the fairly modest goal of
    avoiding having circuitcheck depend on a single cluster
    node.

    Hopefully I'm missing something basic...

    - Bob

  2. RE: Running circuit on multiple nodes in a cluster

    Bob Tinkelman wrote:

    > [Aside: I don't understand the utility of having
    > per-node files. It would seem there should
    > be just a single file, shared by multiple
    > processes, with appropriate locking.]
    >


    One of the nicest I/O features in VMS is the shared log
    file. That's done with something like (DCL example)

    OPEN /APPEND /SHARE=WRITE ...

    RMS I/O to this style of sequential file will add
    each new record at the end of the file, ideal for a
    shared log file (or similar). RMS handles all of the
    locking automatically.
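    A complete (if contrived) illustration of that pattern; the
    file name and logical name are made up:

    $! Each process opens the same file for shared append access;
    $! RMS serializes the writes, so records never interleave.
    $ node = F$GETSYI("NODENAME")
    $ OPEN /APPEND /SHARE=WRITE logf disk$logs:[logs]shared.log
    $ WRITE logf "probe result from node ''node'"
    $ CLOSE logf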

    ==== on node specific directories ======


    I've had more than one issue (at least, with Multinet)
    where I've disagreed with the "Process" way of doing
    things. There's a lot of power (and room for some
    fairly subtle errors) in the VMS system shared
    directory, as well as the Multinet shared root
    structure (which accomplishes the same thing,
    differently).

    Unfortunately, the PMDF directory tree, probably
    from the original days, does not have a cluster
    node specific component (at least, as far as I
    know).

    Carl Friedberg
    friedberg@esb.com
    www.esb.com
    The Elias Book of Baseball Records
    2008 Edition


  3. RE: Running circuit on multiple nodes in a cluster

    > Unfortunately, the PMDF directory tree, probably
    > from the original days, does not have a cluster
    > node specific component (at least, as far as I
    > know).


    On the contrary, I've had a setup using PMDF_COMMON and PMDF_SPECIFIC in place
    for at least 15 years, perhaps longer. Works fine, and the changes to
    PMDF_STARTUP to support it are minimal. You do have to be careful when
    upgrading to restore your changes, and make sure that you don't have anything
    silly in a node-specific directory, but the latter at least is always
    a concern when doing this.

    Although I've never done it this way myself, you could probably implement it by
    putting a wrapper around PMDF_STARTUP.
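    Such a wrapper might amount to nothing more than defining
    the node-specific logicals before invoking the stock
    procedure. A sketch, with all file names and paths invented
    for illustration:

    $! PMDF_STARTUP_WRAPPER.COM  -- hypothetical wrapper
    $ node = F$GETSYI("NODENAME")
    $! Shared tree, plus a per-node tree for node-specific files
    $ DEFINE /SYSTEM /EXEC PMDF_COMMON   disk$pmdf:[pmdf.common]
    $ DEFINE /SYSTEM /EXEC PMDF_SPECIFIC disk$pmdf:[pmdf.'node']
    $ @SYS$STARTUP:PMDF_STARTUP.COM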

    Ned
