Backup and Server Redundancy - NetWare

Thread: Backup and Server Redundancy

  1. Backup and Server Redundancy

    Hi there!!!

    My company has one main NetWare 5.00d server (in production, with 150
    users) and two non-production servers (also 5.00d).

    We currently run daily, weekly, monthly and annual backups of the
    production server, using a disk-to-disk-to-tape scheme.

    The production server is in a different tree from the others (the network
    was set up many years ago, without prior NetWare knowledge).

    We are now at a stage where any server downtime can cause HUGE losses.
    As we have no redundancy at the moment, it is a precarious situation
    when anything goes wrong with the production server.


    I am looking into the possibilities of server redundancy for at least
    the production server.

    My questions are:
    1. What backup systems can be recommended for something like this?
    2. What redundancy/clustering systems can be recommended?
    3. What licensing would be required?
    4. Can we build onto the current system, or is a "new start" better?

    Any help in this would be greatly appreciated.

  2. Re: Backup and Server Redundancy

    On 13 Jan 2005 08:07:09 -0800, Billy wrote:
    > My company has one main NetWare 5.00d server (in production, with 150
    > users) and two non-production servers (also 5.00d).


    Yuck. Those are ancient versions, probably not even supported any more.

    > I am looking into the possibilities of server redundancy for at least
    > the production server.
    >
    > My questions are:
    > 1. What backup systems can be recommended for something like this?
    > 2. What redundancy/clustering systems can be recommended?
    > 3. What licensing would be required?


    Upgrade. Get yourself to NetWare 6.5, and go with their Clustering. Once
    set up and configured correctly, you have a multinode cluster where
    services can move from one node to another, pretty much transparent to the
    end users, allowing for 99.9999% uninterrupted operations.
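
    Roughly, each clustered service becomes a "cluster resource": a virtual
    server name and IP plus a load and unload script that follow the service
    around the cluster. As a minimal sketch (pool, volume, virtual server name
    and address below are all invented - check the Novell Cluster Services
    docs for your version):

        # Load script - runs on whichever node takes over the resource
        nss /poolactivate=DATAPOOL
        mount DATA VOLID=254
        add secondary ipaddress 10.1.1.50
        NUDP ADD CLUSTER_DATA_SERVER 10.1.1.50

        # Unload script - runs when the resource migrates off the node
        del secondary ipaddress 10.1.1.50
        NUDP DEL CLUSTER_DATA_SERVER 10.1.1.50
        nss /pooldeactivate=DATAPOOL

    Clients keep talking to the virtual server name and secondary IP, so they
    mostly don't notice which physical box is actually serving them.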

    You'll have to see your Novell salespeople for upgrade pricing and
    licensing information.

    > 4. Can we build onto the current system, or is a "new start" better?


    I'd start by using those two extra servers and building a two-node NW65
    cluster on them. Once you have that up and running and understand it, then
    start migrating your users and their data over to it.


    --
    | David Gersic dgersic_@_niu.edu |
    | ROTFL: Rave On, Tofu-Flinging Leeches |
    | Email address is munged to avoid spammers. Remove the underscores. |

  3. Re: Backup and Server Redundancy

    David Gersic wrote:

    >On 13 Jan 2005 08:07:09 -0800, Billy wrote:
    >> 2. What redundancy/clustering systems can be recommended?
    >
    >Upgrade. Get yourself to NetWare 6.5, and go with their Clustering. Once
    >set up and configured correctly, you have a multinode cluster where
    >services can move from one node to another, pretty much transparent to the
    >end users, allowing for 99.9999% uninterrupted operations.
    >[...]


    I'm looking at the same thing - a standby (mirror) server vs clustering.
    From what I (quickly) saw, clustering uses a common storage pool -
    outside of the server box itself. Yes, No?

  4. Re: Backup and Server Redundancy

    On Thu, 13 Jan 2005 21:15:05 GMT, www.JimWilliamson.net wrote:
    > From what I (quickly) saw, clustering uses a common storage pool -
    > outside of the server box itself. Yes, No?


    Ideally, yes. You can do it with shared SCSI (one host adapter in each
    server, connected to a chain of shared disks), iSCSI, or some kind of
    fibrechannel SAN setup. iSCSI looks to be a good way to build an
    inexpensive cluster.
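
    NetWare 6.5 also ships with both halves of iSCSI, which is why it makes
    for a cheap cluster: one box exports a chunk of storage as an iSCSI
    target, and the cluster nodes run the initiator and treat that storage
    like local disk. Very roughly, and from memory (treat the exact NCF and
    command names as assumptions and check the NW65 docs):

        # On the box acting as the shared storage (iSCSI target):
        ton.ncf                       # load the NetWare iSCSI target,
                                      # then carve out LUNs via iManager

        # On each cluster node (iSCSI initiator):
        ion.ncf                       # load the iSCSI initiator
        iscsinit connect 10.1.1.10    # attach to the target (address invented)

    After that the shared devices show up in NSSMU like any other disk, and
    you mark them sharable for clustering.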

    --
    | David Gersic dgersic_@_niu.edu |
    | I'm so broke I'm thinking of starting my own government... |
    | Email address is munged to avoid spammers. Remove the underscores. |

  5. Re: Backup and Server Redundancy

    David Gersic wrote:

    >On Thu, 13 Jan 2005 21:15:05 GMT, www.JimWilliamson.net wrote:
    >> From what I (quickly) saw, clustering uses a common storage pool -
    >> outside of the server box itself. Yes, No?

    >
    >Ideally, yes. You can do it with shared SCSI (one host adapter in each
    >server, connected to a chain of shared disks), iSCSI, or some kind of
    >fibrechannel SAN setup. iSCSI looks to be a good way to build an
    >inexpensive cluster.


    Why, when desiring to have redundancy (many 9's of uptime), is an externally
    located (single) storage pool ideal? With two servers, each with a host
    adapter, running to a shared storage pool (of redundant drives) - isn't
    there at least some piece of the puzzle that is non-redundant (leading to a
    single point of failure)?

    I'd think that two servers, fully supplied to stand on their own, possibly
    located apart from each other, communicating changes to each other's storage
    pool, would be preferred. One box might be considered the master and the
    other the slave, or they could load balance as equals. Is this type of
    setup used?

  6. Re: Backup and Server Redundancy

    On Fri, 14 Jan 2005 18:03:26 GMT, www.JimWilliamson.net wrote:
    >>Ideally, yes. You can do it with shared SCSI (one host adapter in each
    >>server, connected to a chain of shared disks), iSCSI, or some kind of
    >>fibrechannel SAN setup. iSCSI looks to be a good way to build an
    >>inexpensive cluster.

    >
    > Why, when desiring to have redundancy (many 9's of uptime), is an externally
    > located (single) storage pool ideal?


    Doing it with shared SCSI is doing it on the cheap. That's good for
    training, proof of concept, or a VMware setup to play with. iSCSI, on the
    other hand, can be a connection to a virtual and internally redundant disk
    system, which is pretty close to what you get with SAN traffic on
    fibrechannel. Fibrechannel is SCSI commands embedded in its own
    gigabit-speed networking protocol over a fibre transport - similar in
    spirit to, but not quite, IP. iSCSI is SCSI commands embedded in IP
    traffic, which can run over gigabit ethernet, on either fibre or copper.

    On the back end, the SAN hardware is typically a dedicated RAID setup,
    with multiple physical disks connected with multiple controllers in a mesh
    framework so that there isn't a single point of failure. Usually they add
    things like hot spare disks so that even a disk failure is handled without
    you needing to know about it.

    You can accomplish similar things with an iSCSI front end and a
    multiple-disk RAID setup on the back end. Not quite as robust as a SAN,
    but close to it and _lots_ cheaper.

    For Novell's clustering software, there are multiple communications paths
    between the cluster nodes. They use IP to talk to each other and keep track
    of which nodes are currently up or down. That's a potential point of
    failure: if the network goes away, each node thinks that all the other
    nodes have died. So they also use the shared storage as a second
    communications channel. Even with no network, all the nodes can see (via
    the disk) that the other nodes are alive, so they won't attempt to deal
    with failures that haven't actually happened.
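
    (The disk half of that heartbeat lives on a small partition the cluster
    software sets aside for itself on the shared storage.) Once the cluster is
    up you can watch both views from the server console - these are standard
    Novell Cluster Services commands, though the output varies a bit by
    version:

        cluster view      # this node's view of cluster membership
        cluster status    # each resource and the node it is currently running on

    If the LAN heartbeat dies but the disk heartbeat is still ticking, the
    nodes know not to start fighting over the resources.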

    > With two servers, each with a host
    > adapter, running to a shared storage pool (of redundant drives) - isn't
    > there at least some piece of the puzzle that is non-redundant (leading to a
    > single point of failure)?


    Have a look at some of the SAN stuff. XIOTech, EMC, and IBM all have quite
    a lot of information available. If you have fully meshed out your SAN, any
    one (or even more than one) failure in the disk subsystem will be dealt
    with so that you experience no down time. With clustering on top of that,
    software failures also will be dealt with so that the users can't tell
    that anything has gone wrong.

    > I'd think that two servers, fully supplied to stand on their own, possibly
    > located apart from each other, communicating changes to each other's storage
    > pool, would be preferred.


    Vinca's "standby server" stuff does this. I don't know what it's called
    these days. It has some advantages, being somewhat cheaper than some SAN
    deployments and being "off site", but failover is significantly slower: it
    first has to notice that there's a problem, then change the server name and
    boot the standby machine so that it can come up in place of the normal one.

    With SAN, you could at least in theory have two disk cabinets, one local,
    one remote, and run a mirror between them so that you'd off-site your data.
    There are some distance limitations you'd have to investigate.


    --
    | David Gersic dgersic_@_niu.edu |
    | Don't be a SAP and wear 'em too tight, if they RIP you'll go "ARP!" |
    | Email address is munged to avoid spammers. Remove the underscores. |

  7. Re: Backup and Server Redundancy

    David Gersic wrote:

    >[...]
    >With SAN, you could at least in theory have two disk cabinets, one local,
    >one remote, and run a mirror between them so that you'd off-site your data.
    >There are some distance limitations you'd have to investigate.


    Interesting stuff - thanks David.

    If/when something changes for me in this arena I'll report back.

  8. Re: Backup and Server Redundancy

    If you're feeling flush, why not push the boat out and get two SANs? Then
    you can set up NSS mirroring. That way you have multiple cluster nodes and
    multiple mirrored storage devices - although most storage boxes have a lot of
    redundancy built into them, that's one certain way to eliminate any single
    point of failure.
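
    For the mirror itself, the usual route (a sketch only - the pool name is
    invented, and the exact menus depend on your NSS version) is to present one
    LUN from each SAN to every node and build a software RAID 1 across them
    before creating the cluster-enabled pool:

        # At the server console:
        nssmu
        # In NSSMU: initialise both LUNs and mark them Sharable for Clustering,
        # then under RAID Devices create a RAID 1 with a partition from each LUN,
        # and finally create the pool (e.g. DATAPOOL) on that RAID 1 device.

    Either SAN can then drop off the wire without taking the pool down, and the
    cluster carries on against the surviving mirror.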


    Alex Toft
    Networked Systems Consultant
    Leeds Met University
    Cold & Rainy England



  9. Re: Backup and Server Redundancy

    Alex Toft wrote:

    >If you're feeling flush, why not push the boat out and get two SANs? Then
    >you can set up NSS mirroring. That way you have multiple cluster nodes and
    >multiple mirrored storage devices - although most storage boxes have a lot of
    >redundancy built into them, that's one certain way to eliminate any single
    >point of failure.


    Champagne tastes on a beer budget - LOL

    It certainly is good to hear some alternatives, though.
