cvm master failover - Veritas Cluster Server



Thread: cvm master failover

  1. cvm master failover

    Hello,
    I have a problem with CVM master failover.
    Configuration:
    hw: Sun Netra200, HBA Qlogic2200, Brocade 3800, HDS 9200.
    sw: Solaris8, SANPointFS HA 3.4, VCS2.0 + patch04 - 4-node cluster.
    I have configured a ClusterFileSystem with a shared disk group. Everything
    looks and works fine, but ...
    If I disconnect the node (HBA/cable/switch port failure) that is primary for
    CFS, all CVM service groups fault on all nodes. I expect that one of the
    other nodes should become primary.
    If I disconnect a node that is NOT the primary node for CFS, the other nodes
    keep working fine.

    Does anybody have experience with a similar problem?
    Any suggestions, please.
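
    For anyone reproducing this, a quick way to confirm which node holds the
    CVM master role before and after the disconnect is the standard VxVM/VCS
    commands below (a sketch only; the service group name "cvm" is the usual
    default, but check your main.cf if yours differs):

    ```shell
    # On each node: report whether this node is the CVM master or a slave.
    # When the cluster is active, the output ends in MASTER or SLAVE.
    vxdctl -c mode

    # List disk groups; shared (CVM) disk groups are flagged "shared".
    vxdg list

    # VCS view: state of the CVM service group on every system.
    # "cvm" is the conventional group name -- an assumption here.
    hagrp -state cvm

    # Overall cluster summary, including any faulted groups after the test.
    hastatus -summary
    ```

    Running `vxdctl -c mode` on every node before and after pulling the cable
    makes it easy to see whether mastership actually migrated or the whole
    cluster faulted.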


    thx and rgds

    Tomek




  2. Re: cvm master failover


    I see you opened a Support case the next day, which guarantees proper
    followup on your issue.

    For others monitoring this group: issues such as this typically require more
    in-depth investigation. It helps to get a vxexplore from each node so we
    can review the configuration more closely.

    For expedience, you may retrieve the utility here:

    ftp://ftp.veritas.com/pub/support/vxexplore.tar.Z

    Extract it, run it, and return the results to ftp.veritas.com:/incoming .
    Use the case number when prompted, and answer "no" to restarting vxconfigd
    in debug mode.
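
    On Solaris, the collect-and-return steps above amount to roughly the
    following on each node (a sketch only; the script name inside the archive
    and its prompts are assumptions, so follow whatever the extracted README
    says):

    ```shell
    # Unpack the vxexplore archive fetched from the FTP URL above.
    uncompress vxexplore.tar.Z
    tar xf vxexplore.tar

    # Run the collection script as root on each cluster node.
    # It prompts for the support case number; answer "no" when asked
    # about restarting vxconfigd in debug mode.
    # (Script name is an assumption -- check the extracted contents.)
    ./vxexplore

    # Upload the resulting output archive to ftp.veritas.com:/incoming
    ```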

    Best Regards,



