What I am seeing is as follows:
node A fails over to node B without problem.
When whatever caused the failover (whether an error or maintenance) is corrected and
the groups are switched back to node A, the Mount resource fails because the
file system has not been released from node B.
Here's the catch: if you run "ls", "bdf", etc., you do not see the mount point,
but you can still execute "umount /mountpoint" and it works, and the resource
then fails over properly and is brought online on node A.
Why does it work fine on node A but catch on node B? Any thoughts on a
fix?
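To confirm whether node B is really holding a stale mount that ls/bdf no longer report, the kernel mount table (/etc/mnttab on HP-UX) can be checked directly before switching the group back. Below is a minimal sketch in POSIX shell; the helper name check_stale and the /mountpoint path are hypothetical, and on the real cluster you would pass the value of the Mount resource's MountPoint attribute:

```shell
# check_stale MOUNTPOINT [MNTTAB]
# Prints "stale" if MOUNTPOINT still appears in the kernel mount table
# even though ls/bdf no longer show it, otherwise "clean".
# MNTTAB defaults to /etc/mnttab (the HP-UX mount table); it is a
# parameter here only so the sketch can be exercised against a test file.
check_stale() {
    mnt=$1
    mnttab=${2:-/etc/mnttab}
    # Match the mount point as a whole field, bounded by whitespace.
    if grep -q "[[:space:]]$mnt[[:space:]]" "$mnttab" 2>/dev/null; then
        echo "stale"
    else
        echo "clean"
    fi
}

# If this reports "stale" on node B, a manual cleanup before the switch
# back would look like (hypothetical mount point):
#   fuser -ku /mountpoint   # kill any process still holding the mount
#   umount /mountpoint
```

If the mount really does linger after offline, the Mount agent's offline behavior on node B is the place to look; "hares -display <resource>" will show which node VCS believes last held the resource.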
VRTSagtfw 3.5P1.32 VERITAS Cluster Server VProAgent Framework
VRTSfspro 3.5-ga08 VERITAS File System Management Services Provider
VRTSgab 3.5 VERITAS Group Membership and Atomic Broadcast
VRTSllt 3.5 VERITAS Low Latency Transport
VRTSnfsmt 3.5P1.55 VERITAS VCS VPro NFSMount Agent
VRTSob 3.0.2.261a VERITAS Enterprise Administrator Service
VRTSobgui 3.0.2.261a VERITAS Enterprise Administrator
VRTSperl 3.5 Veritas Perl for VRTSvcs
VRTSsap 4.0.17 VERITAS VCS Agent for SAP
VRTSvcs 3.5 Veritas Cluster Server
VRTSvcsag 3.5 Veritas Cluster Server Bundled Agents
VRTSvcsdc 3.5 Veritas Cluster Server Documentation
VRTSvcsmg 3.5 VERITAS Cluster Server Message Catalogs
VRTSvcsmn 3.5 VERITAS Cluster Server Manual Pages
VRTSvcsor 3.5 Veritas Cluster Server Oracle Extension
VRTSvcsw 3.5 Veritas Cluster Manager (Web Console)
VRTSvlic 3.00.007e VERITAS License Utilities
VRTSvmdoc 3.5m VERITAS Volume Manager Documentation
VRTSvmpro 3.5m VERITAS Volume Manager Management Services Provider
VRTSvxfs 3.5-ga15 VERITAS File System with CFS Support
VRTSvxvm 3.5m Base VERITAS Volume Manager 3.5 for HP-UX