IPMP and IPMultiNICB - Veritas Cluster Server


  1. IPMP and IPMultiNICB


    Hi,
    Could someone give me some help with the following issue?
    I'm configuring two nodes in a VCS 4.0 cluster, and I want to use MultiNICB
    in conjunction with IPMultiNICB and IPMP (Solaris). It should be really easy,
    but I just don't get it. I know that I have to plumb and configure my NICs
    through hostname.* files, and then configure the MultiNICB resources in VCS. What
    I don't understand is the following: in the hostname.* files you need to
    specify the logical IP address, just as you would with plain IPMP without VCS clustering.
    That's fine until I need to configure the hostname.* files on my second server.
    Should I use the same "failover address" as on the first server? Won't that
    result in two servers trying to use the same IP address? As you can see I'm
    totally lost. I can configure the cluster with regular MultiNIC and IPMultiNIC,
    but the failover time is a bit long. Thanx in advance.

  2. Re: IPMP and IPMultiNICB

    OK, you've got 2 servers, thus you've got 2 MultiNICB resources (or 1
    resource with localized settings for each server).


    This means that on each server you set up IPMP for that server (i.e. with
    addresses unique to it).

    For IPMP (forget VCS for the moment) you will need 3 IP addresses (1
    base address per interface and 1 admin address). These will be unique to each
    server.


    Once the IPMP is configured, you need to tell VCS about the resource(s).

    Normally, this is done like this (I have 2 nodes -> nodeA and nodeB):


    MultiNICB mymnic (
        Device @nodeA = { hme0, qfe1 }
        Device @nodeB = { qfe1, qfe3 }
        UseMpathd = 1                      // this is what tells VCS that we want
                                           // IPMP (in.mpathd) to take care of the
                                           // internal fail-over
        MpathdCommand = "/sbin/in.mpathd"
        )



    OK, so your next question would be "what about a fail-over IP address?"



    This is where the IPMultiNICB resource comes in. The IPMultiNICB
    resource can fail over between servers.



    BIG EXAMPLE
    -----------


    OK, so I have 2 machines (nodeA and nodeB). I want to use IPMP to do
    fail-overs internally on each machine, and I want VCS to monitor this.

    On a machine (whichever is "live" at that stage), I want to run Oracle
    and the Oracle Listener. For this purpose, I need to have a "floating IP"
    that clients can connect to if they want to reach the database.

    This IP address is 10.1.1.100


    So, now, how are we going to do this?


    On nodeA
    --------


    We need all the IP addresses in the same subnet, so we will assign
    10.1.1.1 to hme0, 10.1.1.2 to qfe1, and the admin address (which indicates
    the "live" address between these 2 interfaces) will be 10.1.1.10

    #cat /etc/hostname.hme0
    10.1.1.1 netmask + broadcast + group ipmp0 deprecated -failover up addif 10.1.1.10
    netmask + broadcast + up



    The above will plumb the admin address (10.1.1.10) on hme0 on startup



    #cat /etc/hostname.qfe1
    10.1.1.2 netmask + broadcast + group ipmp0 deprecated -failover up




    On nodeB
    --------


    We need the same subnet (but different IP addresses) on these (qfe1 and
    qfe3) interfaces. So we will assign 10.1.1.11 to qfe1 and 10.1.1.12 to
    qfe3 and have an admin address of 10.1.1.20. Again, we will plumb the
    admin address on qfe1 on startup

    #cat /etc/hostname.qfe1
    10.1.1.11 netmask + broadcast + group ipmp0 deprecated -failover up addif 10.1.1.20
    netmask + broadcast + up


    #cat /etc/hostname.qfe3
    10.1.1.12 netmask + broadcast + group ipmp0 deprecated -failover up
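
    A quick way to sanity-check the IPMP side before involving VCS (just a sketch;
    the group name ipmp0 and the interface names are the ones from this example,
    and in.mpathd only manages interfaces that belong to an IPMP group):

    #ifconfig -a
    (both interfaces should show the same group, and the test addresses should be
    flagged DEPRECATED and NOFAILOVER)

    #ps -ef | grep in.mpathd
    (in.mpathd must be running, otherwise UseMpathd = 1 makes no sense)

    #if_mpadm -d hme0
    #if_mpadm -r hme0
    (if your Solaris release ships if_mpadm: detach and re-attach hme0 - while it
    is detached, the admin address 10.1.1.10 should move over to qfe1)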







    ------------------------
    OK - Solaris part done, now onto VCS


    I'm not going to show all the resources, but the important ones are the
    MultiNICB and the IPMultiNICB (which will hold the floating IP address
    10.1.1.100 on the "live" machine).




    so, from the main.cf


    ...
    ...

    MultiNICB mymnic (
        Device @nodeA = { hme0, qfe1 }
        Device @nodeB = { qfe1, qfe3 }
        UseMpathd = 1                      // this is what tells VCS that we want
                                           // IPMP (in.mpathd) to take care of the
                                           // internal fail-over
        MpathdCommand = "/sbin/in.mpathd"
        )


    IPMultiNICB Oracle_Floating_IP (
        BaseResName = mymnic
        Address = "10.1.1.100"
        NetMask = "255.0.0.0"
        )



    ...
    ...







    And that is it !!!!

    You can start doing fancy stuff as well now. You can have the MultiNICB
    resource in one service group and the IPMultiNICB in another. Then you use
    a Proxy resource to point to the original MultiNICB resource (see the
    Bundled Agents Guide for such an example, and the sketch below).
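
    Roughly along the lines of that Bundled Agents Guide pattern (only a sketch -
    the group names network_SG and oracle_SG are made up here, the resource names
    are the ones from the example above):

    group network_SG (
        SystemList = { nodeA = 0, nodeB = 1 }
        Parallel = 1
        )

        MultiNICB mymnic (
            ...
            )

    group oracle_SG (
        SystemList = { nodeA = 0, nodeB = 1 }
        )

        IPMultiNICB Oracle_Floating_IP (
            BaseResName = mymnic
            Address = "10.1.1.100"
            NetMask = "255.0.0.0"
            )

        Proxy mymnic_proxy (
            TargetResName = mymnic
            )

        Oracle_Floating_IP requires mymnic_proxy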





    Hope that explains it a bit. Just a side note: once you've configured
    IPMP, please do a "ps -ef | grep mpathd". In later versions of
    Solaris, /sbin/in.mpathd is a link. You will need to specify the
    path of the real executable (where the link points to) in the resource. That
    path will also show up in the "ps" output.
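
    For example (nothing fancy - just resolving the link and confirming the
    running daemon; /usr/lib/inet/in.mpathd is only what it happens to be on
    some Solaris releases):

    #ls -l /sbin/in.mpathd
    (if it is a symbolic link, note where it points, e.g. /usr/lib/inet/in.mpathd)

    #ps -ef | grep in.mpathd
    (the path shown here is the one to put into MpathdCommand)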


    And that concludes today's lesson. Now go get teacher a beer !!!







  3. Re: IPMP and IPMultiNICB


    Hi, I've set up IPMP on my servers (now running VCS 4.1 on Solaris 9).

    Still I have some problems with the MultiNICB and IPMultiNICB resources. I've got almost
    everything working... I changed the path to the executable from /sbin/in.mpathd
    to /usr/lib/inet/in.mpathd (and also tried "/usr/lib/inet/in.mpathd -a").

    Still I get these messages in /var/VRTSvcs/log/engine_A.log:

    2005/05/26 13:16:41 VCS ERROR V-16-10001-6505 (srvun05) MultiNICBB_lan_NIC:monitor:The
    mpathd process (/usr/lib/inet/in.mpathd) does not exist

    Another strange thing: I've configured a test service group utilizing the
    MultiNICB group, but when I try to bring the IPMultiNICB resource online I get
    these messages:

    2005/05/26 13:25:30 VCS ERROR V-16-10001-6505 (srvun03) MultiNICBB_lan_NIC:monitor:The
    mpathd process (/usr/lib/inet/in.mpathd) does not exist
    2005/05/26 13:25:30 VCS INFO V-16-1-50135 User admin fired command: hares
    -online TEST_prod_IP srvun03 from 192.168.8.150
    2005/05/26 13:25:30 VCS NOTICE V-16-1-10233 Clearing Restart attribute for
    group test_SG on all nodes
    2005/05/26 13:25:30 VCS NOTICE V-16-1-10301 Initiating Online of Resource
    TEST_prod_IP (Owner: unknown, Group: test_SG) on System srvun03
    2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    IP address is configured elsewhere. Will not online

  4. Re: IPMP and IPMultiNICB


    Solved the "monitor:The mpathd process (/usr/lib/inet/in.mpathd) does not
    exist" problem.

    BUT, I still have problems with this one:
    >2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >IP address is configured elsewhere. Will not online


    I'm 100% sure this address is not configured elsewhere, what can I do??
    Is there any other logfile?

    Regards,
    //Mattias



    "Mattias Lundström" wrote:
    >
    >Hi, I've setup ipmp on my servers (Now running vcs 4.1 on Solaris 9).
    >
    >Still I have some problems with the multinicb and ipmultinicb. I've got

    almost
    >everything working... I changed the path to the executable from /sbin/in.mpathd
    >to /usr/lib/inet/in.mpathd (also tried "/usr/lib/inet/in.mpathd -a").
    >
    >Still I get these messages in /var/VRTSvcs/log/engine_A.log:
    >
    >2005/05/26 13:16:41 VCS ERROR V-16-10001-6505 (srvun05) MultiNICBB_lan_NIC:monitor:The
    >mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >
    >Another strange thing, I've configured a test servicegroup utilizing the
    >multinicB group, but when I try to activate the ipmultinic resource I get
    >the messages:
    >
    >2005/05/26 13:25:30 VCS ERROR V-16-10001-6505 (srvun03) MultiNICBB_lan_NIC:monitor:The
    >mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >2005/05/26 13:25:30 VCS INFO V-16-1-50135 User admin fired command: hares
    >-online TEST_prod_IP srvun03 from 192.168.8.150
    >2005/05/26 13:25:30 VCS NOTICE V-16-1-10233 Clearing Restart attribute for
    >group test_SG on all nodes
    >2005/05/26 13:25:30 VCS NOTICE V-16-1-10301 Initiating Online of Resource
    >TEST_prod_IP (Owner: unknown, Group: test_SG) on System srvun03
    >2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >IP address is configured elsewhere. Will not online
    >.



  5. Re: IPMP and IPMultiNICB


    Hi,
    I figured it out myself, and it's exactly how you described it. Thanx
    a lot. What confused me was that IPMP didn't seem to work as it should, and
    then I got nowhere with VCS. One of my interfaces was configured wrong on
    the switch. After this was corrected, I struggled a bit with VCS; the reason:
    the "defaultrouter" wasn't open for ICMP probes or ping, so IPMP failed all my
    interfaces. Anyway, it's all working now, and you've provided a great how-to
    for future reference.

    If you're in Oslo tonight, swing by the office and get that cold beer.

    Med hilsen / With regards
    Jørgen Henriksen
    Sys.admin Unix









  6. Re: IPMP and IPMultiNICB

    All the IPMultiNICB resource does is find the "live" interface and then
    plumb the IP address on it.

    If the plumb fails (try it by hand - it is a good test), the resource will
    just tell you the reason why it could not plumb.


    Now, what I would suggest is the following:

    Online the MultiNICB resource (hares -online <resource> -sys <system>)

    Then look at the config (ifconfig -a)

    See which interface is the "live" one (the interface with 2 IP addresses
    plumbed up)

    Then "addif" the new address on that interface (say you have hme0 with the "admin" address
    on hme0:1 --> then the new address goes on hme0:2) and plumb the IP address. See if
    this works.
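
    Something like this, using the addresses from the example earlier in the
    thread (adjust to your own interfaces and netmask):

    #ifconfig hme0 addif 10.1.1.100 netmask 255.0.0.0 up
    (this should come up on the next free logical interface, e.g. hme0:2)

    #ifconfig hme0 removeif 10.1.1.100
    (clean up again before letting VCS online the IPMultiNICB resource)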


    If not, see where the IP address actually is.



    One last thing to check:

    See if your IPMultiNICB resource is in a parallel service group (there
    will be a "Parallel = 1" below the group definition in main.cf). If
    that is the case, then you're trying to plumb the same IP address on
    more than one machine - you will get a duplicate IP address !!
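
    A quick way to check (group and resource names taken from your log - adjust
    if yours differ):

    #hagrp -value test_SG Parallel
    (should print 0 for a normal failover group)

    #hares -value TEST_prod_IP Address
    (then run "ifconfig -a" on the other node and make sure nothing already holds
    that address)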


    If you get stuck, post the main.cf, "ifconfig -a" and the hostname.*
    files here and I will have a look for you

    Good luck

    Mattias Lundström wrote:
    > I'm 100% sure this address is not configured elsewhere, what can I do?
    > Is there any other logfile?


  7. Re: IPMP and IPMultiNICB

    Damn !!!!


    Sydney is just too far away to come and get my beer !!



    J. Henriksen wrote:
    > If you're in Oslo tonight, swing by the office and get that cold beer.


  8. Re: IPMP and IPMultiNICB


    Hi,
    I'm facing the same problem with the "monitor:The mpathd process (/usr/lib/inet/in.mpathd)
    does not exist" error.

    How did you solve it eventually?
    10x.

    "Mattias Lundström" wrote:
    >
    >Solved the "monitor:The mpathd process (/usr/lib/inet/in.mpathd) does not
    >exist" problem.
    >
    >BUT, I still have problems with the
    >>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>IP address is configured elsewhere. Will not online

    >
    >I'm 100% sure this addresss is not configured elsewhere, what can I do??
    >Is there any other logfile?
    >
    >Regards,
    >//Mattias
    >
    >
    >
    >"Mattias Lundström" wrote:
    >>
    >>Hi, I've setup ipmp on my servers (Now running vcs 4.1 on Solaris 9).
    >>
    >>Still I have some problems with the multinicb and ipmultinicb. I've got

    >almost
    >>everything working... I changed the path to the executable from /sbin/in.mpathd
    >>to /usr/lib/inet/in.mpathd (also tried "/usr/lib/inet/in.mpathd -a").
    >>
    >>Still I get these messages in /var/VRTSvcs/log/engine_A.log:
    >>
    >>2005/05/26 13:16:41 VCS ERROR V-16-10001-6505 (srvun05) MultiNICBB_lan_NIC:monitor:The
    >>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>
    >>Another strange thing, I've configured a test servicegroup utilizing the
    >>multinicB group, but when I try to activate the ipmultinic resource I get
    >>the messages:
    >>
    >>2005/05/26 13:25:30 VCS ERROR V-16-10001-6505 (srvun03) MultiNICBB_lan_NIC:monitor:The
    >>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>2005/05/26 13:25:30 VCS INFO V-16-1-50135 User admin fired command: hares
    >>-online TEST_prod_IP srvun03 from 192.168.8.150
    >>2005/05/26 13:25:30 VCS NOTICE V-16-1-10233 Clearing Restart attribute

    for
    >>group test_SG on all nodes
    >>2005/05/26 13:25:30 VCS NOTICE V-16-1-10301 Initiating Online of Resource
    >>TEST_prod_IP (Owner: unknown, Group: test_SG) on System srvun03
    >>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>IP address is configured elsewhere. Will not online
    >>.

    >



  9. Re: IPMP and IPMultiNICB


    I am getting exactly the same error "monitor:The mpathd process (/usr/lib/inet/in.mpathd)
    does not exist"

    How did you fix it?

    "Mattias Lundström" wrote:
    >
    >Solved the "monitor:The mpathd process (/usr/lib/inet/in.mpathd) does not
    >exist" problem.
    >
    >BUT, I still have problems with the
    >>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>IP address is configured elsewhere. Will not online

    >
    >I'm 100% sure this addresss is not configured elsewhere, what can I do??
    >Is there any other logfile?
    >
    >Regards,
    >//Mattias
    >
    >
    >
    >"Mattias Lundström" wrote:
    >>
    >>Hi, I've setup ipmp on my servers (Now running vcs 4.1 on Solaris 9).
    >>
    >>Still I have some problems with the multinicb and ipmultinicb. I've got

    >almost
    >>everything working... I changed the path to the executable from /sbin/in.mpathd
    >>to /usr/lib/inet/in.mpathd (also tried "/usr/lib/inet/in.mpathd -a").
    >>
    >>Still I get these messages in /var/VRTSvcs/log/engine_A.log:
    >>
    >>2005/05/26 13:16:41 VCS ERROR V-16-10001-6505 (srvun05) MultiNICBB_lan_NIC:monitor:The
    >>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>
    >>Another strange thing, I've configured a test servicegroup utilizing the
    >>multinicB group, but when I try to activate the ipmultinic resource I get
    >>the messages:
    >>
    >>2005/05/26 13:25:30 VCS ERROR V-16-10001-6505 (srvun03) MultiNICBB_lan_NIC:monitor:The
    >>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>2005/05/26 13:25:30 VCS INFO V-16-1-50135 User admin fired command: hares
    >>-online TEST_prod_IP srvun03 from 192.168.8.150
    >>2005/05/26 13:25:30 VCS NOTICE V-16-1-10233 Clearing Restart attribute

    for
    >>group test_SG on all nodes
    >>2005/05/26 13:25:30 VCS NOTICE V-16-1-10301 Initiating Online of Resource
    >>TEST_prod_IP (Owner: unknown, Group: test_SG) on System srvun03
    >>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>IP address is configured elsewhere. Will not online
    >>.

    >



  10. Re: IPMP and IPMultiNICB


    I'm also interested in the fix.

    I have the following errors:

    Nov 15 11:55:58 maquxs03 Had[1572]: [ID 702911 daemon.notice] VCS ERROR V-16-1-6505
    (maquxs03) MultiNICB:MNICB:monitor:The mpathd process (/sbin/in.mpathd) does
    not exist
    Nov 15 11:55:58 maquxs03 Had[1572]: [ID 702911 daemon.notice] VCS ERROR V-16-1-6507
    (maquxs03) MultiNICB:MNICB:monitor:Restart of mpathd failed

    thanks in advance

    "Rohana" wrote:
    >
    >I am getting exactly the same error "monitor:The mpathd process (/usr/lib/inet/in.mpathd)
    >does not exist"
    >
    >How did you fix it?
    >
    >"Mattias Lundström" wrote:
    >>
    >>Solved the "monitor:The mpathd process (/usr/lib/inet/in.mpathd) does not
    >>exist" problem.
    >>
    >>BUT, I still have problems with the
    >>>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>>IP address is configured elsewhere. Will not online

    >>
    >>I'm 100% sure this addresss is not configured elsewhere, what can I do??
    >>Is there any other logfile?
    >>
    >>Regards,
    >>//Mattias
    >>
    >>
    >>
    >>"Mattias Lundström" wrote:
    >>>
    >>>Hi, I've setup ipmp on my servers (Now running vcs 4.1 on Solaris 9).
    >>>
    >>>Still I have some problems with the multinicb and ipmultinicb. I've got

    >>almost
    >>>everything working... I changed the path to the executable from /sbin/in.mpathd
    >>>to /usr/lib/inet/in.mpathd (also tried "/usr/lib/inet/in.mpathd -a").
    >>>
    >>>Still I get these messages in /var/VRTSvcs/log/engine_A.log:
    >>>
    >>>2005/05/26 13:16:41 VCS ERROR V-16-10001-6505 (srvun05) MultiNICBB_lan_NIC:monitor:The
    >>>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>>
    >>>Another strange thing, I've configured a test servicegroup utilizing the
    >>>multinicB group, but when I try to activate the ipmultinic resource I

    get
    >>>the messages:
    >>>
    >>>2005/05/26 13:25:30 VCS ERROR V-16-10001-6505 (srvun03) MultiNICBB_lan_NIC:monitor:The
    >>>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>>2005/05/26 13:25:30 VCS INFO V-16-1-50135 User admin fired command: hares
    >>>-online TEST_prod_IP srvun03 from 192.168.8.150
    >>>2005/05/26 13:25:30 VCS NOTICE V-16-1-10233 Clearing Restart attribute

    >for
    >>>group test_SG on all nodes
    >>>2005/05/26 13:25:30 VCS NOTICE V-16-1-10301 Initiating Online of Resource
    >>>TEST_prod_IP (Owner: unknown, Group: test_SG) on System srvun03
    >>>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>>IP address is configured elsewhere. Will not online
    >>>.

    >>

    >



  11. Re: IPMP and IPMultiNICB


    I'm also getting this error, and nowhere in this thread does anybody give an
    explanation or a fix for the problem.

    Fix please :-)

    "moshe levy" wrote:
    >
    >Hi,
    >I'm facing the same problem with the "monitor:The mpathd process (/usr/lib/inet/in.mpathd)
    >does not
    >>exist" error.

    >How did you solve it eventually?
    >10x.
    >
    >"Mattias Lundström" wrote:
    >>
    >>Solved the "monitor:The mpathd process (/usr/lib/inet/in.mpathd) does not
    >>exist" problem.
    >>
    >>BUT, I still have problems with the
    >>>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>>IP address is configured elsewhere. Will not online

    >>
    >>I'm 100% sure this addresss is not configured elsewhere, what can I do??
    >>Is there any other logfile?
    >>
    >>Regards,
    >>//Mattias
    >>
    >>
    >>
    >>"Mattias Lundström" wrote:
    >>>
    >>>Hi, I've setup ipmp on my servers (Now running vcs 4.1 on Solaris 9).
    >>>
    >>>Still I have some problems with the multinicb and ipmultinicb. I've got

    >>almost
    >>>everything working... I changed the path to the executable from /sbin/in.mpathd
    >>>to /usr/lib/inet/in.mpathd (also tried "/usr/lib/inet/in.mpathd -a").
    >>>
    >>>Still I get these messages in /var/VRTSvcs/log/engine_A.log:
    >>>
    >>>2005/05/26 13:16:41 VCS ERROR V-16-10001-6505 (srvun05) MultiNICBB_lan_NIC:monitor:The
    >>>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>>
    >>>Another strange thing, I've configured a test servicegroup utilizing the
    >>>multinicB group, but when I try to activate the ipmultinic resource I

    get
    >>>the messages:
    >>>
    >>>2005/05/26 13:25:30 VCS ERROR V-16-10001-6505 (srvun03) MultiNICBB_lan_NIC:monitor:The
    >>>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>>2005/05/26 13:25:30 VCS INFO V-16-1-50135 User admin fired command: hares
    >>>-online TEST_prod_IP srvun03 from 192.168.8.150
    >>>2005/05/26 13:25:30 VCS NOTICE V-16-1-10233 Clearing Restart attribute

    >for
    >>>group test_SG on all nodes
    >>>2005/05/26 13:25:30 VCS NOTICE V-16-1-10301 Initiating Online of Resource
    >>>TEST_prod_IP (Owner: unknown, Group: test_SG) on System srvun03
    >>>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>>IP address is configured elsewhere. Will not online
    >>>.

    >>

    >



  12. Re: IPMP and IPMultiNICB


    How did you resolve the in.mpathd does not exist problem?

    "Mattias Lundström" wrote:
    >
    >Solved the "monitor:The mpathd process (/usr/lib/inet/in.mpathd) does not
    >exist" problem.
    >
    >BUT, I still have problems with the
    >>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>IP address is configured elsewhere. Will not online

    >
    >I'm 100% sure this addresss is not configured elsewhere, what can I do??
    >Is there any other logfile?
    >
    >Regards,
    >//Mattias
    >
    >
    >
    >"Mattias Lundström" wrote:
    >>
    >>Hi, I've setup ipmp on my servers (Now running vcs 4.1 on Solaris 9).
    >>
    >>Still I have some problems with the multinicb and ipmultinicb. I've got

    >almost
    >>everything working... I changed the path to the executable from /sbin/in.mpathd
    >>to /usr/lib/inet/in.mpathd (also tried "/usr/lib/inet/in.mpathd -a").
    >>
    >>Still I get these messages in /var/VRTSvcs/log/engine_A.log:
    >>
    >>2005/05/26 13:16:41 VCS ERROR V-16-10001-6505 (srvun05) MultiNICBB_lan_NIC:monitor:The
    >>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>
    >>Another strange thing, I've configured a test servicegroup utilizing the
    >>multinicB group, but when I try to activate the ipmultinic resource I get
    >>the messages:
    >>
    >>2005/05/26 13:25:30 VCS ERROR V-16-10001-6505 (srvun03) MultiNICBB_lan_NIC:monitor:The
    >>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>2005/05/26 13:25:30 VCS INFO V-16-1-50135 User admin fired command: hares
    >>-online TEST_prod_IP srvun03 from 192.168.8.150
    >>2005/05/26 13:25:30 VCS NOTICE V-16-1-10233 Clearing Restart attribute

    for
    >>group test_SG on all nodes
    >>2005/05/26 13:25:30 VCS NOTICE V-16-1-10301 Initiating Online of Resource
    >>TEST_prod_IP (Owner: unknown, Group: test_SG) on System srvun03
    >>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>IP address is configured elsewhere. Will not online
    >>.

    >



  13. Re: IPMP and IPMultiNICB


    Hi there,
    have you tried this:

    MpathdCommand = "/usr/lib/inet/in.mpathd -a"

    (Unauthoritative answer ;-) )

    Good luck,
    Wolli




  14. Re: IPMP and IPMultiNICB


    Greetings everyone,

    I am seeing a lot of comments asking how to fix the VCS "in.mpathd does not
    exist" error. I fixed that issue by running the following command:

    hares -modify IPMP-res MpathdCommand "/usr/lib/inet/in.mpathd -a"

    I hope this helps. Thanks.
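
    If the cluster configuration is read-only you will need to open it first and
    save it again afterwards, and you can then verify the value and re-probe the
    resource (same resource name as above; put in your own system name):

    #haconf -makerw
    #hares -modify IPMP-res MpathdCommand "/usr/lib/inet/in.mpathd -a"
    #haconf -dump -makero

    #hares -value IPMP-res MpathdCommand
    #hares -probe IPMP-res -sys <system>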


    "Jorge Martinez" wrote:
    >
    >I also interest in the fix.
    >
    >I have the following errors:
    >
    >Nov 15 11:55:58 maquxs03 Had[1572]: [ID 702911 daemon.notice] VCS ERROR

    V-16-1-6505
    >(maquxs03) MultiNICB:MNICB:monitor:The mpathd process (/sbin/in.mpathd)

    does
    >not exist
    >Nov 15 11:55:58 maquxs03 Had[1572]: [ID 702911 daemon.notice] VCS ERROR

    V-16-1-6507
    >(maquxs03) MultiNICB:MNICB:monitor:Restart of mpathd failed
    >
    >thanks in advantage
    >
    >"Rohana" wrote:
    >>
    >>I am getting exactly the same error "monitor:The mpathd process (/usr/lib/inet/in.mpathd)
    >>does not exist"
    >>
    >>How did you fix it?
    >>
    >>"Mattias Lundström" wrote:
    >>>
    >>>Solved the "monitor:The mpathd process (/usr/lib/inet/in.mpathd) does

    not
    >>>exist" problem.
    >>>
    >>>BUT, I still have problems with the
    >>>>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>>>IP address is configured elsewhere. Will not online
    >>>
    >>>I'm 100% sure this addresss is not configured elsewhere, what can I do??
    >>>Is there any other logfile?
    >>>
    >>>Regards,
    >>>//Mattias
    >>>
    >>>
    >>>
    >>>"Mattias Lundström" wrote:
    >>>>
    >>>>Hi, I've setup ipmp on my servers (Now running vcs 4.1 on Solaris 9).
    >>>>
    >>>>Still I have some problems with the multinicb and ipmultinicb. I've got
    >>>almost
    >>>>everything working... I changed the path to the executable from /sbin/in.mpathd
    >>>>to /usr/lib/inet/in.mpathd (also tried "/usr/lib/inet/in.mpathd -a").
    >>>>
    >>>>Still I get these messages in /var/VRTSvcs/log/engine_A.log:
    >>>>
    >>>>2005/05/26 13:16:41 VCS ERROR V-16-10001-6505 (srvun05) MultiNICBB_lan_NIC:monitor:The
    >>>>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>>>
    >>>>Another strange thing, I've configured a test servicegroup utilizing

    the
    >>>>multinicB group, but when I try to activate the ipmultinic resource I

    >get
    >>>>the messages:
    >>>>
    >>>>2005/05/26 13:25:30 VCS ERROR V-16-10001-6505 (srvun03) MultiNICBB_lan_NIC:monitor:The
    >>>>mpathd process (/usr/lib/inet/in.mpathd) does not exist
    >>>>2005/05/26 13:25:30 VCS INFO V-16-1-50135 User admin fired command: hares
    >>>>-online TEST_prod_IP srvun03 from 192.168.8.150
    >>>>2005/05/26 13:25:30 VCS NOTICE V-16-1-10233 Clearing Restart attribute

    >>for
    >>>>group test_SG on all nodes
    >>>>2005/05/26 13:25:30 VCS NOTICE V-16-1-10301 Initiating Online of Resource
    >>>>TEST_prod_IP (Owner: unknown, Group: test_SG) on System srvun03
    >>>>2005/05/26 13:25:30 VCS ERROR V-16-10001-5013 (srvun03) IPMultiNICB:TEST_prod_IPnline:This
    >>>>IP address is configured elsewhere. Will not online
    >>>>.
    >>>

    >>

    >


