File locations



cluster 3.0/3.1

cluster 3.2

man pages

/usr/cluster/man

/usr/cluster/man

log files

/var/cluster/logs
/var/adm/messages


/var/cluster/logs
/var/adm/messages


sccheck logs

/var/cluster/sccheck/report.<date>

/var/cluster/sccheck/report.<date>

cluster check logs

N/A

/var/cluster/logs/cluster_check/<date>/ (U2)

CCR files

/etc/cluster/ccr

/etc/cluster/ccr/

Cluster infrastructure file

/etc/cluster/ccr/infrastructure

/etc/cluster/ccr/<cluster_name>/infrastructure (U2)





SCSI Reservations



cluster 3.0/3.1

cluster 3.2

Display reservation keys

scsi2:
/usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/d4s2


scsi3:
/usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d4s2


scsi2:
/usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/d4s2


scsi3:
/usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d4s2


Determine the device owner

scsi2:
/usr/cluster/lib/sc/pgre -c pgre_inresv -d /dev/did/rdsk/d4s2


scsi3:
/usr/cluster/lib/sc/scsi -c inresv -d /dev/did/rdsk/d4s2


scsi2:
/usr/cluster/lib/sc/pgre -c pgre_inresv -d /dev/did/rdsk/d4s2


scsi3:
/usr/cluster/lib/sc/scsi -c inresv -d /dev/did/rdsk/d4s2



Cluster information



cluster 3.0/3.1

cluster 3.2

Quorum info

scstat -q

clquorum show

Cluster components

scstat -pv

cluster show

Resource/Resource group status

scstat -g

clrg show
clrs show


IP Networking Multipathing

scstat -i



Status of all nodes

scstat -n

clnode show

Disk device groups

scstat -D

cldg show

Transport info

scstat -W

clintr show

Detailed resource/resource group

scrgadm -pv

clrs show -v
clrg show -v


Cluster configuration info

scconf -p

cluster show -v

Installation info (prints packages and version)

scinstall -pv

scinstall -pv


Cluster Configuration



cluster 3.0/3.1

cluster 3.2

Integrity check

sccheck

cluster check (U2)

Configure the cluster (add nodes, add data services, etc.)


scinstall



scinstall


Cluster configuration utility (quorum, data services, resource groups, etc.)

scsetup

clsetup

Add a node

scconf -a -T node=<host>



Remove a node

scconf -r -T node=<host>



Prevent new nodes from entering

scconf -a -T node=.

Note: the literal dot (.) denies cluster access to all new nodes.



Put a node into maintenance state

scconf -c -q node=<node>,maintstate

Note: use the scstat -q command to verify that the node is in maintenance state; the vote count should be zero for that node.

clnode evacuate <node>

Note: use the clquorum status command to verify that the node is in maintenance state; the vote count should be zero for that node.

Get a node out of maintenance state

scconf -c -q node=<node>,reset

Note: use the scstat -q command to verify that the node is out of maintenance state; the vote count should be one for that node.

clquorum reset

Note: use the clquorum status command to verify that the node is out of maintenance state; the vote count should be one for that node.
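
For example, a minimal maintenance round trip on a hypothetical node named node2, using only the 3.0/3.1 commands above:

# scconf -c -q node=node2,maintstate   (node2's vote count drops to zero)
# scstat -q                            (verify the zero vote count)
# scconf -c -q node=node2,reset        (node2's vote count returns to one)
# scstat -q                            (verify the vote count is back to one)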


Admin Quorum Device

Quorum votes come from nodes, disk devices, and quorum servers, so the total quorum count is the sum of all node and device votes.




cluster 3.0/3.1

cluster 3.2

Adding a device to the quorum

scconf -a -q globaldev=d11

Note: if you get the error message "unable to scrub device", use scgdevs to add the device to the global device namespace.

clquorum add <deviceid>

Note: if you get the error message "unable to scrub device", use cldevice populate to add the device to the global device namespace.

Removing a device from the quorum

scconf -r -q globaldev=d11

clquorum remove <deviceid>

Remove the last quorum device

Evacuate all nodes

Put the cluster into maintenance mode
# scconf -c -q installmode

Remove the quorum device
# scconf -r -q globaldev=d11

Check the quorum devices
# scstat -q


cluster set -p installmode=enabled

clquorum remove d11

Check the quorum devices

clquorum show
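
As a worked example, removing a last quorum device d11 with the 3.2 commands above (d11 is the example device id used in this table; adjust to your configuration):

# cluster set -p installmode=enabled   (put the cluster into install mode)
# clquorum remove d11                  (remove the last quorum device)
# clquorum show                        (confirm no quorum devices remain)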

Resetting quorum info

scconf -c -q reset

Note: this will bring all offline quorum devices online

clquorum reset

Note: this will bring all offline quorum devices online

Bring a quorum device into maintenance mode

Obtain the device number
# scdidadm -L
# scconf -c -q globaldev=<deviceid>,maintstate


clquorum disable <deviceid>

Bring a quorum device out of maintenance mode

scconf -c -q globaldev=<deviceid>,reset

clquorum enable <deviceid>
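
For example, cycling a hypothetical quorum device d11 through maintenance with the 3.2 commands above:

# clquorum disable d11   (the device's votes are no longer counted)
# clquorum status        (verify the device is offline)
# clquorum enable d11    (restore the device's votes)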


Device Configuration



cluster 3.0/3.1

cluster 3.2

Lists all the configured devices including paths across all nodes.

scdidadm -L

cldevice list -v

Lists all the configured devices including paths, on a single node only.

scdidadm -l

cldevice list -v -n <node>

Reconfigure the device database, creating new instance numbers if required.

scdidadm -r

scdidadm -r

Lists all the configured devices including paths & fencing

N/A

cldevice show -v

Rename a did instance

N/A

cldevice rename -d <destination_device> <device>

Clearing did instances that are no longer used

scdidadm -C

cldevice clear

Perform the repair procedure for a particular path (use this when a disk gets replaced)

scdidadm -R <device> (repair by device name)
scdidadm -R 2 (repair by did instance id)


cldevice repair <device>
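
A sketch of the repair procedure after physically replacing a disk, assuming a hypothetical did instance d4 behind device c1t1d0:

3.0/3.1:
# scdidadm -L | grep c1t1d0   (confirm the did instance of the replaced disk)
# scdidadm -R 4               (repair by did instance id)

3.2:
# cldevice repair d4          (repair by did device name)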

Configure the global device namespace

scgdevs

cldevice populate

Status of all disk paths

scdpm -p all:all

Note: the argument format is <host>:<disk>


cldevice status

Monitor device path

scdpm -m <node>:<disk>

cldevice monitor -n <node> <disk>

Unmonitor device path

scdpm -u <node>:<disk>

cldevice unmonitor -n <node> <disk>


Device group



cluster 3.0/3.1

cluster 3.2

Adding/Registering

scconf -a -D type=vxvm,name=appdg,nodelist=<host>:<host>,preferenced=true

cldg create -t <type> -n <nodelist> -d <device> <devgrp>

Removing

scconf -r -D name=<devgrp>

cldg remove-node [-t <type>] -n <node> <devgrp>

cldg remove-device -d <device> <devgrp>

Adding single node

scconf -a -D type=vxvm,name=appdg,nodelist=<host>

cldg add-node -t <type> -n <node> <devgrp>

Removing single node

scconf -r -D name=<devgrp>,nodelist=<host>

cldg remove-node -t <type> -n <node> <devgrp>

Switch

scswitch -z -D <devgrp> -h <host>

cldg switch -t <type> -n <node> <devgrp>

Put into maintenance mode

scswitch -m -D <devgrp>

cldg disable -t <type> <devgrp>

Take out of maintenance mode

scswitch -z -D <devgrp> -h <host>

cldg enable -t <type> <devgrp>

Onlining a device group

scswitch -z -D <devgrp> -h <host>

cldg online -t <type> -n <node> <devgrp>

Offlining a device group

scswitch -F -D <devgrp>

cldg offline -t <type> <devgrp>

Resync a device group

scconf -c -D name=appdg,sync

cldg sync -t <type> <devgrp>
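
Putting the rows above together, moving a device group between nodes might look like this (appdg is the group name used in the examples above; node2 is hypothetical):

3.0/3.1:
# scswitch -z -D appdg -h node2

3.2:
# cldg switch -n node2 appdg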


Transport cable



cluster 3.0/3.1

cluster 3.2

Enable

scconf -c -m endpoint=<host>:qfe1,state=enabled

clintr enable <host>:<interface>

Disable

scconf -c -m endpoint=<host>:qfe1,state=disabled

Note: it gets deleted


clintr disable <host>:<interface>
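
For example, taking one end of a transport cable offline and back online (node1 is hypothetical; qfe1 is the adapter used in the examples above):

3.0/3.1:
# scconf -c -m endpoint=node1:qfe1,state=disabled
# scconf -c -m endpoint=node1:qfe1,state=enabled

3.2:
# clintr disable node1:qfe1
# clintr enable node1:qfe1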


Resource Groups



cluster 3.0/3.1

cluster 3.2

Adding

scrgadm -a -g <res_group> -h <node>,<node>

clrg create -n <node>,<node> <res_group>

Removing

scrgadm -r -g <res_group>

clrg delete <res_group>

Changing properties

scrgadm -c -g <res_group> -y <property=value>

clrg set -p <property=value> <res_group>

Listing

scstat -g

clrg show

Detailed List

scrgadm -pv -g <res_group>

clrg show -v <res_group>

Display mode type (failover or scalable)

scrgadm -pv -g <res_group> | grep 'Res Group mode'

clrg show -v <res_group>

Offlining

scswitch -F -g <res_group>

clrg offline <res_group>

Onlining

scswitch -Z -g <res_group>

clrg online <res_group>

Unmanaging

scswitch -u -g <res_group>

Note: (all resources in group must be disabled)

clrg unmanage <res_group>

Note: (all resources in group must be disabled)

Managing

scswitch -o -g <res_group>

clrg manage <res_group>

Switching

scswitch -z -g <res_group> -h <host>

clrg switch -n <node> <res_group>
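
Putting the rows above together, a hypothetical failover group app-rg on nodes node1 and node2 could be created, brought online, and switched like this (3.2 syntax; all names are illustrative):

# clrg create -n node1,node2 app-rg   (create the group on both nodes)
# clrg online -M app-rg               (manage the group and bring it online)
# clrg switch -n node2 app-rg         (move the group to node2)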


Resources



cluster 3.0/3.1

cluster 3.2

Adding failover network resource

scrgadm -a -L -g <res_group> -l <logicalhost>

clreslogicalhostname create -g <res_group> <lh_resource>

Adding shared network resource

scrgadm -a -S -g <res_group> -l <sharedhost>

clressharedaddress create -g <res_group> <sa_resource>

Adding a failover apache application and attaching the network resource

scrgadm -a -j apache_res -g <res_group> \
-t SUNW.apache -y Network_resources_used=<logicalhost> \
-y Scalable=False -y Port_list=80/tcp \
-x Bin_dir=/usr/apache/bin





Adding a shared apache application and attaching the network resource

scrgadm -a -j apache_res -g <res_group> \
-t SUNW.apache -y Network_resources_used=<sharedhost> \
-y Scalable=True -y Port_list=80/tcp \
-x Bin_dir=/usr/apache/bin
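
The source leaves the 3.2 column empty for the two apache rows above. A minimal hedged sketch of the 3.2 equivalent, assuming hypothetical names apache_rg and apache_res and that the group already contains the network resource (verify the property names against your SUNW.apache version; 3.2 sets all properties with -p):

# clrt register SUNW.apache            (register the type if not already done)
# clresource create -g apache_rg -t SUNW.apache \
  -p Scalable=False -p Port_list=80/tcp \
  -p Bin_dir=/usr/apache/bin apache_res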





Create a HAStoragePlus failover resource

scrgadm -a -g <res_group> -j <resource> -t SUNW.HAStoragePlus \
-x FileSystemMountPoints=/oracle/data01 -x AffinityOn=true

clresource create -g <res_group> -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/test2 -p AffinityOn=true <resource>

Removing

scrgadm -r -j <resource>

Note: must disable the resource first

clresource delete <resource>

Note: must disable the resource first

Changing properties

scrgadm -c -j <resource> -y <property=value>

clresource set -p <property=value> <resource>

List

scstat -g

clresource list

Detailed List

scrgadm -pv -j <resource>
scrgadm -pvv -j <resource>


clresource list -v

Disable resource monitor

scrgadm -n -M -j <resource>

clresource unmonitor <resource>

Enable resource monitor

scrgadm -e -M -j <resource>

clresource monitor <resource>

Disabling

scswitch -n -j <resource>

clresource disable <resource>

Enabling

scswitch -e -j <resource>

clresource enable <resource>

Clearing a failed resource

scswitch -c -h <host>,<host> -j <resource> -f STOP_FAILED

clrs clear -f STOP_FAILED <resource>
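
For example, clearing a STOP_FAILED flag on the apache_res resource from the earlier examples (node names hypothetical):

3.0/3.1:
# scswitch -c -h node1,node2 -j apache_res -f STOP_FAILED

3.2:
# clrs clear -f STOP_FAILED apache_res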

Find the network of a resource

scrgadm -pvv -j <resource> | grep -i network

scrgadm -pvv -j <resource> | grep -i network

Removing a resource and resource group

Offline the group: scswitch -F -g <res_group>

Remove the resource: scrgadm -r -j <resource>

Remove the resource group: scrgadm -r -g <res_group>


clrg offline <res_group>

clrs delete <resource>

clrg delete <res_group>


Resource Types



cluster 3.0/3.1

cluster 3.2

Adding

scrgadm -a -t <res_type>, e.g. SUNW.HAStoragePlus

clrt register <res_type>

Deleting

scrgadm -r -t <res_type>

Note: first set the RT_SYSTEM property on the resource type to false

clrt unregister <res_type>

Note: first set the RT_SYSTEM property on the resource type to false

Listing

scrgadm -pv | grep 'Res Type name'

clrt list
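
For example, registering and verifying the SUNW.HAStoragePlus type mentioned above (3.2 syntax):

# clrt register SUNW.HAStoragePlus
# clrt list   (the newly registered type should appear in the output)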



