fibre channel tape drives accessed from multiple clusters - VMS


  1. fibre channel tape drives accessed from multiple clusters

    I'll have to admit that I haven't kept up with vms SAN management too
    much. There is no way I can afford the devices, switches, and PCI
    cards, and it really doesn't come up too often with tapesys, because a
    fibre channel tape drive looks pretty much like a hard-connected one.

    For the purpose of this discussion, I care nothing about fibre channel
    disks, but about fibre channel tape drives, especially in fibre
    channel jukeboxes, i.e. the juke robot is a fibre channel device too.

    As long as the devices are connected to only one cluster, I can
    basically ignore the SAN, as I mentioned earlier. Within the cluster,
    the device allocate takes care of all locking issues. And for the
    robot, the JB (Jukebox Manager) remote robot capability, together with
    the tapesys database, allows synchronized access even across clusters
    and standalone nodes, as long as all access is via tapesys/JB and
    nobody issues an MRU robot command directly.

    The problem is the tape drives. The lack of a SAN-wide locking
    mechanism results in crosstalk, with multiple nodes trying to access
    the same drives at the same time. My understanding is that most
    customers get around this problem by assigning specific drives to
    specific clusters, allowing the allocate command to work correctly.
    However, I currently have a customer who insists on all drives in all
    jukeboxes being available on all clusters.

    Now both JB and tapesys have central server processes that can serve
    an entire network, rather than just the cluster, so I could implement
    a lock function in one of those, and that would give me locking for
    the entire SAN. The problem is that it only controls tapesys/JB and
    nothing prevents some bozo from allocating a tape drive from the
    command line on a different cluster and manipulating it. Until the
    master lock is implemented, tapesys on another cluster can do it,
    which is the problem I am diagnosing now. I have always depended on
    the allocate system service, but in this situation it doesn't work.

    A node in one cluster allocates the drive then uses it for a backup.
    A node in another cluster allocates the same drive and also starts a
    backup. Both backups then get fatal drive errors and positioning
    errors and "volume not software enabled".

    I really consider this a failing of the SAN/fibrechannel protocol for
    not providing a locking mechanism, or of VMS for not using it if there
    is one. But in any case, the customer wants me to fix it. Before I
    go to the effort of adding my own inter-cluster tape drive lock
    mechanism, is there an easier way to do it, with just straight vms?

  2. Re: fibre channel tape drives accessed from multiple clusters

    On May 25, 4:29 pm, wayne.sew...@gmail.com wrote:
    > I'll have to admit that I haven't kept up with vms SAN management too
    > much. There is no way I can afford the devices, switches, and PCI
    > cards


    Then you haven't been paying very much attention to eBay. :-)

    > I really consider this a failing of the SAN/fibrechannel protocol for
    > not providing a locking mechanism, or of VMS for not using it if there
    > is one. But in any case, the customer wants me to fix it. Before I
    > go to the effort of adding my own inter-cluster tape drive lock
    > mechanism, is there an easier way to do it, with just straight vms?


    I won't try to assign blame as long as you don't blame VMS. It's been
    no secret that concurrent access to any device from two or more
    systems doesn't work unless they are in a common cluster with a shared
    lock manager.
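
    Within a single cluster, that shared lock manager is exactly what
    makes allocation safe. Just to illustrate the contrast (a minimal
    sketch with an invented resource name, not anything tapesys or ABS
    actually does), a cluster-wide exclusive lock from C is only a
    couple of calls:

    /* Sketch: the VMS distributed lock manager serialises access
     * cluster-wide, but no further.  Resource name is invented for
     * illustration; minimal error handling only. */
    #include <ssdef.h>
    #include <lckdef.h>
    #include <descrip.h>
    #include <starlet.h>
    #include <stdio.h>

    int main(void)
    {
        $DESCRIPTOR(resnam, "TAPESYS$DRIVE_MGA0");  /* hypothetical name */
        unsigned int lksb[6];   /* status word, lock id, value block */
        unsigned int status;

        /* Queue an exclusive-mode lock and wait.  Any other process in
         * the same cluster asking for this resource blocks here - but a
         * process in a different cluster never sees the resource. */
        status = sys$enqw(0, LCK$K_EXMODE, (void *)lksb, 0, &resnam,
                          0, 0, 0, 0, 0, 0);
        if (!(status & 1) || !(lksb[0] & 1))
            return status;

        puts("drive is ours cluster-wide; run the backup here");

        /* The lock id comes back in the second longword of the LKSB. */
        return sys$deq(lksb[1], 0, 0, 0);
    }

    The DLM does nothing for you across cluster boundaries, which is
    exactly the gap you're seeing on the SAN.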

    Similarly, from what I know (and I don't claim it's very much) fibre
    has always required multiple hosts to communicate amongst themselves
    for sharing access to a single device (disk or tape).

    Maybe your client was misled about what "shared" fibre could
    accomplish.

  3. Re: fibre channel tape drives accessed from multiple clusters

    On Sun, May 25, 2008 at 4:29 PM, wrote:
    > [snip]

    There used to be a device called a Modular Data Router that enabled
    you to make a SCSI device SAN-aware.

    Don't know if HP still sells them, though.

    WWWebb

  4. Re: fibre channel tape drives accessed from multiple clusters

    On May 25, 5:03 pm, "William Webb" wrote:
    > > [snip]
    >
    > There used to be a device called a Modular Data Router that enabled
    > you to make a SCSI-device SAN-aware.
    >
    > Don't know if HP still sells them, though.
    >
    > WWWebb


    I am not trying to set up a SAN. I am trying to deal with a
    customer's existing SAN.

  5. Re: fibre channel tape drives accessed from multiple clusters

    The FC protocol moves data between devices. That's all it's designed to do.

    The way you set the infrastructure up determines which systems can see which
    devices. That's what "zoning" in the FC switches does (it's analogous to
    VLANs in a data network). In a SAN infrastructure where the zoning allows
    all HBAs to see all tape devices, you will have to implement some form
    of access control to ensure synchronised and serialised access to the
    available pool of tape drives.

    Do beware of having Windows boxes capable of seeing the tape and robot
    devices as well as your VMS systems - the Windows removable storage service
    and the device drivers have a nasty habit of probing the tape and robot
    devices every so often, which can play havoc with operations in progress.
    It's easy enough to fix - just disable the relevant services and device
    drivers in the Windows boxes, or set up the zoning so that the Windows boxes
    cannot see the tape devices.

    If you choose to have all your systems see all your storage devices or tape
    devices then you have to deal with the consequences by ensuring that you
    don't have multiple systems attempting to access the same device at the same
    time. It's not a problem that's unique to tapes. Disc storage arrays
    implement "device presentation", which describes which HBA WWIDs (systems)
    can access which available storage units (see the EVA Vdisk presentations or
    look at selective presentation in HSGs). Tapes don't implement device
    presentation as far as I'm aware - and while it might be a useful concept
    (presentations being a little easier to work with than SAN zoning) you'd
    still have to arbitrate access to the tapes if more than one system can see
    the tapes at any given instant.

    In a VMS environment that's what ABS/MDMS does for you. ABS uses network
    communications to arbitrate access to the tape devices: one (or more, for
    redundancy) ABS servers allocate tape devices to client systems and manage
    the tapes, moving them around using the robot; the ABS clients then perform
    the backup functions directly to the tapes; and the servers take care of
    the tape moving again afterwards.

    Alternatively you should be able to write something pretty simple to
    synchronise access to the tape devices and robots, then use BACKUP as and
    when you need to. All you need to do is keep track of the tapes, robots and
    backup jobs across the separate clusters.
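
    Something along these lines would do as a starting point: a trivial
    lock broker that owns a drive-name-to-owner table and answers one
    request per TCP connection. Everything here (port, wire format,
    names) is invented for illustration, and a real version needs
    timeouts and cleanup for nodes that die while holding a lock:

    /* Minimal sketch of a network-wide tape drive lock broker.
     * Protocol (invented): client sends "LOCK drive node" or
     * "UNLOCK drive node"; server answers GRANTED, BUSY, OK or ERROR. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #define MAX_DRIVES 64
    #define PORT 5150                       /* arbitrary */

    struct drive_lock {
        char drive[32];                     /* e.g. "$2$MGA0" */
        char owner[32];                     /* node holding it, "" = free */
    };

    static struct drive_lock table[MAX_DRIVES];
    static int ndrives;

    static struct drive_lock *find(const char *drive)
    {
        int i;
        for (i = 0; i < ndrives; i++)
            if (strcmp(table[i].drive, drive) == 0)
                return &table[i];
        if (ndrives == MAX_DRIVES)
            return NULL;
        strncpy(table[ndrives].drive, drive, 31);  /* first use creates */
        table[ndrives].drive[31] = '\0';
        table[ndrives].owner[0] = '\0';
        return &table[ndrives++];
    }

    int main(void)
    {
        struct sockaddr_in sa;
        int ls = socket(AF_INET, SOCK_STREAM, 0);
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons(PORT);
        sa.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(ls, (struct sockaddr *)&sa, sizeof sa) < 0 ||
            listen(ls, 5) < 0) {
            perror("lock broker");
            return 1;
        }
        for (;;) {                          /* one request per connection */
            char req[128], verb[8], drive[32], node[32];
            const char *reply = "ERROR\n";
            int cs = accept(ls, NULL, NULL);
            ssize_t n = recv(cs, req, sizeof req - 1, 0);
            if (n > 0) {
                req[n] = '\0';
                if (sscanf(req, "%7s %31s %31s", verb, drive, node) == 3) {
                    struct drive_lock *dl = find(drive);
                    if (dl && strcmp(verb, "LOCK") == 0) {
                        if (dl->owner[0] == '\0' ||
                            strcmp(dl->owner, node) == 0) {
                            strncpy(dl->owner, node, 31);
                            dl->owner[31] = '\0';
                            reply = "GRANTED\n";
                        } else
                            reply = "BUSY\n";
                    } else if (dl && strcmp(verb, "UNLOCK") == 0 &&
                               strcmp(dl->owner, node) == 0) {
                        dl->owner[0] = '\0';
                        reply = "OK\n";
                    }
                }
                send(cs, reply, strlen(reply), 0);
            }
            close(cs);
        }
    }

    The client side is just a connect / send / read of one line before
    every allocate and after every deallocate.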

    You could also control tape access by using SAN zoning in the SAN switches,
    thus restricting access from a single cluster to a single known set of
    tapes, however you'd have to change the SAN zoning if you needed to make
    those tapes available to other nodes / clusters.

    However, why bother doing all that when you can buy the ABS server and
    client components? It's what I've done for the big systems I've been
    designing and building recently. ABS has been around for a good few years
    now. It works pretty well on the whole.

    It's also worth reading the SAN design reference guide to understand SAN
    zoning and a few other things. See here:
    http://h20000.www2.hp.com/bizsupport...eriesId=406734

    It's not that difficult once you understand what's going on and why it all
    works as it does, then figure out what mechanisms you need to use to achieve
    what's needed.

    --
    Cheers, Colin.
    Legacy = Stuff that works properly!



  6. Re: fibre channel tape drives accessed from multiple clusters

    In article , "Colin Butcher"
    writes:

    >However, why bother doing all that when you can buy the ABS server and
    >client components? It's what I've done for the big systems I've been
    >designing and building recently. ABS has been around for a good few years
    >now. It works pretty well on the whole.


    I think the reason Wayne can't do that is that he's supporting a third-party
    competitor to ABS.

    -- Alan

  7. Re: fibre channel tape drives accessed from multiple clusters

    On May 26, 6:26 am, "Colin Butcher"
    wrote:
    > The FC protocol moves data between devices. That's all it's designed to do.
    >
    > The way you set the infrastructure up determines which systems can see which
    > devices. That's what "zoning" in the FC switches does (it's analogous to
    > VLANs in a data network). In a SAN infrastructure where the zoning allows
    > all HBAs to see all tape devices then you will have to implement some form
    > of access control to ensure synchronised and serialised access to the
    > available pool of tape drives.
    >
    > Do beware of having Windows boxes capable of seeing the tape and robot
    > devices as well as your VMS systems - the Windows removable storage service
    > and the device drivers have a nasty habit of probing the tape and robot
    > devices every so often, which can play havoc with operations in progress.
    > It's easy enough to fix - just disable the relevant services and device
    > drivers in the Windows boxes, or set up the zoning so that the Windows boxes
    > cannot see the tape devices.
    >


    Typical billybox behavior.

    But there's also the possibility that any one of those foreign
    systems, billy or otherwise, could deliberately intend to use a tape
    drive, which would amount to the same thing.

    > If you choose to have all your systems see all your storage devices or tape
    > devices then you have to deal with the consequences by ensuring that you
    > don't have multiple systems attempting to access the same device at the same
    > time. It's not a problem that's unique to tapes. Disc storage arrays
    > implement "device presentation" that describe which HBA WWIDs (systems) can
    > access which available storage units (see the EVA Vdisk presentations or
    > look at selective presentation in HSGs). Tapes don't implement device
    > presentation as far as I'm aware - and while it might be a useful concept
    > (presentations being a little easier to work with than SAN zoning) you'd
    > still have to arbitrate access to the tapes if more than one system can see
    > the tapes at any given instant in time.
    >
    > In a VMS environment that's what ABS/MDMS does for you. ABS uses network
    > communications to arbitrate access to the tape devices by having one (or
    > more for redundancy) ABS servers allocate tape devices to client systems,
    > the ABS servers manage the tapes and moving them around using the robot,
    > then the ABS clients perform the backup functions directly to the tapes,
    > then the ABS servers take care of the tape moving again.
    >


    Tapesys/JB can perform these functions as well, with only the cross-
    cluster locking of the tape drives missing. I just wasn't aware that
    the vms allocate command didn't work in this situation. Like a child
    of nature, I just assumed that device locking was handled by the fibre
    channel protocol and vms. I never dreamed that a vms environment
    would allow such chaos. This type of thing is for the billyworld, not
    vms.

    > Alternatively you should be able to write something pretty simple to
    > synchronise access to the tape devices and robots, then use BACKUP as and
    > when you need to.


    Yes, that is what I was planning. The reason for the post is to see
    if something is already out there that does this (just the locking,
    have no interest in abs).

    >All you need to do is keep track of the tapes, robots and
    > backup jobs across the separate clusters.
    >


    Got that part.

    > You could also control tape access by using SAN zoning in the SAN switches,
    > thus restricting access from a single cluster to a single known set of
    > tapes, however you'd have to change the SAN zoning if you needed to make
    > those tapes available to other nodes / clusters.
    >


    Yes, I am aware of that, and I assume most customers are using
    zoning. In all the years that people have been using fibre channel,
    this is the first complaint about crosstalk I have had from a
    customer.

    > However, why bother doing all that when you can buy the ABS server and
    > client components? It's what I've done for the big systems I've been
    > designing and building recently. ABS has been around for a good few years
    > now. It works pretty well on the whole.
    >


    I am not likely to do that, since I maintain a competing
    product. :-) Now that I understand the situation, it is easy enough
    to add such locking to tapesys and JB.

    > It's also worth reading the SAN design reference guide to understand SAN
    > zoning and a few other things. See here: http://h20000.www2.hp.com/bizsupport...ntIndex.jsp?co...
    >
    > It's not that difficult once you understand what's going on and why it all
    > works as it does, then figure out what mechanisms you need to use to achieve
    > what's needed.
    >



  8. Re: fibre channel tape drives accessed from multiple clusters

    I'm sure you'll figure the locking out. Tapesys / JB needs it, just as ABS /
    MDMS and other similar products need it in a shared-device access
    configuration such as that provided by fibrechannel.

    The same problem can exist with discs too - you have to design the storage
    presentations so that only members of the same cluster can see the disc
    devices they need, then cluster-wide access is handled in the usual manner.
    If you present devices to all clusters then it's easy enough to cross-mount
    a disc from another node or cluster and thus screw up the file system on the
    target disc.

    The base FC protocols are "stateless", which is why synchronisation and
    serialisation of access to devices has to be done elsewhere. It would be
    nice to have an extension to the tape drive capability to make it "stateful"
    such that only one HBA port in a machine could access the tape drive at any
    one time, with a means of locking-out access for other HBA ports while the
    tape device is genuinely in use by another HBA. Maybe there's room for a FC
    DAM (FC Device Access Method) or FC TAP (FC Tape Access Protocol) layered on
    top of the basic FC transport to handle the tape allocate, read/write and
    deallocate sequence? You also have to handle the devices or nodes "going
    away" or otherwise failing part way through the sequence of operations while
    those nodes are using the target tape device.
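
    To make that concrete, the per-drive state such a layer would have
    to keep might be as little as this (purely hypothetical - no such FC
    extension exists - with a lease to cover the "going away" failure
    case):

    /* Hypothetical per-drive state for a "stateful" tape access layer.
     * A lease deadline handles the owner dying mid-sequence: if the
     * owner stops renewing, the reservation simply expires. */
    #include <stdbool.h>
    #include <string.h>
    #include <time.h>

    #define LEASE_SECONDS 120               /* arbitrary */

    struct tape_port_state {
        char   owner_wwid[24];              /* holding HBA port, "" = free */
        time_t lease_expires;               /* owner must renew before this */
    };

    /* true  = 'wwid' may use the drive (acquired or renewed);
     * false = "someone else has this drive in use, please try later". */
    bool tape_acquire(struct tape_port_state *s, const char *wwid,
                      time_t now)
    {
        if (s->owner_wwid[0] != '\0' &&
            strcmp(s->owner_wwid, wwid) != 0 &&
            now < s->lease_expires)
            return false;                   /* genuinely in use elsewhere */
        strncpy(s->owner_wwid, wwid, sizeof s->owner_wwid - 1);
        s->owner_wwid[sizeof s->owner_wwid - 1] = '\0';
        s->lease_expires = now + LEASE_SECONDS;
        return true;
    }

    void tape_release(struct tape_port_state *s, const char *wwid)
    {
        if (strcmp(s->owner_wwid, wwid) == 0)
            s->owner_wwid[0] = '\0';        /* only the owner releases */
    }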

    It's not just a billybox problem, you have to cater for all operating
    systems that have access to the visible SAN devices and which will probe the
    SAN devices / LUNs on boot or driver load / reset etc. to determine their
    capabilities.

    Handling arbitration of FC device access control over an "atomic" sequence
    of operations is a tricky little problem and would require low-level things
    such as device driver changes to handle the "someone else has this drive in
    use, please try later" errors.

    It's going to be fun to test it. Good luck.

    --
    Cheers, Colin.
    Legacy = Stuff that works properly!



  9. Re: fibre channel tape drives accessed from multiple clusters

    Colin Butcher wrote:
    > [snip]

    With modern tape libraries you can also do selective presentation of
    the tape drives (and the robotics). That will help somewhat. There is
    also the possibility of using different zoning for different needs. E.g.
    when you want to back up your VMS cluster you use one zoning configuration,
    and when you are done with that you change to another one which gives
    access to the tape library to a winbloze cluster, and so on. That kind of
    zoning reconfiguration can be automated. There aren't many other means of
    synchronizing tape drive access in a heterogeneous SAN at the SAN or OS
    level. Backup applications like ABS can keep track of tape drive access at
    the application level.



  10. Re: fibre channel tape drives accessed from multiple clusters

    wayne.sewell@gmail.com wrote:
    > [snip]
    > I am not likely to do that, since I maintain a competing
    > product. :-) Now that I understand the situation, it is easy enough
    > to add such locking to tapesys and JB.


    SLS is, as you know, descended from an early TapeSys.

    One thing SLS does wrong is that it does not poll DCSC drives through DCSC
    before attempting to assign them to a process issuing a STORAGE ALLOCATE
    command.

    That said, the idea of a central server with distributed clients is
    likely what is needed here. The server would mete out drives and
    media to the clients, while the clients would send data directly to the
    drives.

    The server would, of necessity, need to support non-VMS clients.

    D.J.D.

  11. Re: fibre channel tape drives accessed from multiple clusters

    On May 26, 3:41 pm, Kari Uusimäki
    wrote:
    > > [snip]

    >
    > With modern tape libraries you can also do selective presentation of
    > the tape drives (and the robotics). That will help somewhat. There is
    > also the possibility of using different zoning for different needs. E.g.
    > when you want to back up your VMS cluster you use one zoning configuration,
    > and when you are done with that you change to another one which gives
    > access to the tape library to a winbloze cluster, and so on. That kind of
    > zoning reconfiguration can be automated. There aren't many other means of
    > synchronizing tape drive access in a heterogeneous SAN at the SAN or OS
    > level. Backup applications like ABS can keep track of tape drive access at
    > the application level.


    Yes, I understand about the zoning and reconfiguration and such. This
    particular customer refuses to do that, wants all jukeboxes available
    to all vmsclusters simultaneously. He doesn't have any foreign
    systems in the SAN. Basically I will need to add the tape drive
    locking to tapesys.

  12. Re: fibre channel tape drives accessed from multiple clusters

    On May 27, 7:24 pm, David J Dachtera
    wrote:
    > wayne.sew...@gmail.com wrote:
    > > [snip]


    >
    > That said, the idea of a central server with distributed clients is
    > likely what is needed here. The server would mete out drives and
    > media to the clients, while the clients would send data directly to the
    > drives.
    >


    Yes, that is my plan. Fortunately, tapesys already has such a server,
    so it would be a case of adding a network-wide lock function for
    drives. Media are already under control.

    > The server would, of necessity, need to support non-VMS clients.



    Eventually, but for the first cut it will be vms-only, because this
    particular customer has only vmsclusters connected to the SAN. Once I
    have him happy, I can expand to deal with foreign system types.

  13. Re: fibre channel tape drives accessed from multiple clusters

    wayne.sewell@gmail.com explained:
    >> [snip]

    >
    > Yes, I understand about the zoning and reconfiguration and such. This
    > particular customer refuses to do that, wants all jukeboxes available
    > to all vmsclusters simultaneously. He doesn't have any foreign
    > systems in the SAN. Basically I will need to add the tape drive
    > locking to tapesys.


    Advance warning for die-hard technologists: I'm going to simplify
    a lot...

    You must know that Fibre Channel is, in effect, an implementation
    of the SCSI protocol on a single wire.

    In the SCSI protocol, there is a pair of commands, "reserve/release",
    that can be used to restrict the use of a device to a particular
    host. When a "reserve" command has been sent to a device by a host,
    every other host trying to allocate that device will receive a
    "device is busy" error, until the owning host sends a "release" command.

    We have implemented those two commands in two very simple programs,
    and have inserted calls to those programs at strategic points of our
    DCL backup jobs (SLS/DCSC). All our tape drives (Storagetek 9840 & 9940
    models) are on a switch, visible from all our OpenVMS hosts, and we
    have not had a single allocate collision since then (almost a year ago).
    After that long without problems, I suppose we can assume that it is
    safe...
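
    Stripped of everything platform-specific, the two programs amount to
    little more than this (a sketch, not our production code - the
    send_cdb() stub stands in for the SCSI pass-through, which on VMS
    would be a $QIO with the IO$_DIAGNOSE function and is device- and
    version-specific, so it is omitted here):

    /* SCSI-2 RESERVE(6) / RELEASE(6), opcodes 0x16 / 0x17. */
    #include <stdio.h>
    #include <string.h>

    #define SCSI_RESERVE_6 0x16
    #define SCSI_RELEASE_6 0x17

    /* Placeholder for the real pass-through call.  Real code must also
     * check the returned SCSI status (0x18 = reservation conflict,
     * i.e. another host holds the drive). */
    static int send_cdb(const char *device, const unsigned char *cdb,
                        int cdblen)
    {
        printf("would send opcode 0x%02x (%d-byte CDB) to %s\n",
               cdb[0], cdblen, device);
        return 0;
    }

    int main(int argc, char **argv)
    {
        unsigned char cdb[6];
        if (argc != 3 || (strcmp(argv[1], "reserve") != 0 &&
                          strcmp(argv[1], "release") != 0)) {
            fprintf(stderr, "usage: %s reserve|release device\n", argv[0]);
            return 1;
        }
        /* All other RESERVE(6)/RELEASE(6) fields are zero for a plain
         * whole-device reservation. */
        memset(cdb, 0, sizeof cdb);
        cdb[0] = strcmp(argv[1], "reserve") == 0 ? SCSI_RESERVE_6
                                                 : SCSI_RELEASE_6;
        return send_cdb(argv[2], cdb, sizeof cdb);
    }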

    Unfortunately, that trick is not going to serve us very long, since
    SLS is not ported to Integrity... Does anyone over here have any
    experience with the OpenVMS NetBackup client?

    --
    Marc Van Dyck



  14. Re: fibre channel tape drives accessed from multiple clusters

    On May 29, 5:14 pm, Marc Van Dyck wrote:
    > [snip]
    >
    > Unfortunately, that trick is not going to serve us very long, since
    > SLS is not ported to Integrity...


    No problem. Tapesys has been running on Integrity for a couple of
    years. Several former SLS customers are running on it now. TS
    includes a database conversion from SLS.

    I will have the TAPE OPER LOCK command running in a few days,
    allowing locking of a drive across clusters from within tapesys and
    from the DCL command line. You should also be able to use your
    "reserve/release" programs with tapesys.
