Channel: Symantec Connect - Storage and Clustering - Discussions

Need to know the root cause for filesystem failure in SFS NAS

I need a solution

We detected an issue with the SFS NAS cluster: the filesystems cannot be confirmed with df, and general users cannot log in to the node.

 

 

SFS event log:

 

84441) 2013 Jun 19 17:42:39 kyornas051_01 sfsfs_event.network.alert: Node kyornas051_02 went offline.
84442) 2013 Jun 19 17:45:01 kyornas051_01 sfsfs_event.storage.alert.master: Slave node(s) successfully fenced out from cluster.
84443) 2013 Jun 19 18:19:49 kyornas051_01 sfsfs_event.network.info: Node kyornas051_01 joined the cluster.
84444) 2013 Jun 19 18:19:49 192.168.13.159 sfsfs_event.network.info: Node kyornas051_01 joined the cluster.
84445) 2013 Jun 19 18:21:49 kyornas051_01 sfsfs_event.network.info.master: cluster show currentload successful
84446) 2013 Jun 19 18:22:36 kyornas051_01 sfsfs_event.network.info.master: ip addr show successful
84447) 2013 Jun 19 18:23:40 kyornas051_01 sfsfs_event.network.info.master: services show successful
84448) 2013 Jun 19 18:24:20 kyornas051_01 sfsfs_event.network.alert: Interface bond0 is down on system kyornas051_02.
84449) 2013 Jun 19 18:25:09 kyornas051_01 sfsfs_event.network.info.master: IO Fencing Status: Enabled with SCSI3 Persistent Reservations
84450) 2013 Jun 19 18:25:15 kyornas051_01 sfsfs_event.network.info.master: Checked status of IO Fencing on the coordinator disks
84451) 2013 Jun 19 18:25:22 kyornas051_01 sfsfs_event.network.alert: Interface bond0 is up on system kyornas051_02.
84452) 2013 Jun 19 18:25:26 kyornas051_02 sfsfs_event.network.info: Node kyornas051_02 joined the cluster.
84453) 2013 Jun 19 18:27:01 kyornas051_01 sfsfs_event.network.info.master: cluster show currentload successful
84454) 2013 Jun 19 18:27:21 kyornas051_01 sfsfs_event.network.info.master: cluster show currentload successful
84455) 2013 Jun 19 18:27:56 kyornas051_01 sfsfs_event.network.info.master: cluster show currentload successful
84456) 2013 Jun 19 18:35:05 kyornas051_01 sfsfs_event.network.info.master: cluster show currentload successful

==================================================================

 

Messages file

 

 

messages on kyornas051_01:

2013 Jun 19 17:42:20 kyornas051_01 kernel: LLT INFO V-14-1-10205 link 1 (priveth1) node 1 in trouble
2013 Jun 19 17:42:20 kyornas051_01 kernel: LLT INFO V-14-1-10205 link 0 (priveth0) node 1 in trouble
2013 Jun 19 17:42:26 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 8 sec (1698350566)
2013 Jun 19 17:42:26 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 8 sec (1698356198)
2013 Jun 19 17:42:27 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 9 sec (1698350566)

It then took some time for the LLT links to expire:

2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10509 link 1 (priveth1) node 1 expired
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 0 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 15 sec (1698356198)
2013 Jun 19 17:42:34 kyornas051_01 kernel: LLT INFO V-14-1-10509 link 0 (priveth0) node 1 expired
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port h gen  1132317 membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port v gen  113231a membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port w gen  113231c membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port a gen  1132305 membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port b gen  1132314 membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port f gen  113231e membership 0
2013 Jun 19 17:42:38 kyornas051_01 Had[30829]: VCS INFO V-16-1-10077 Received new cluster membership
2013 Jun 19 17:42:38 kyornas051_01 kernel: VXFEN INFO V-11-1-68 Completed ejection of leaving node(s) from data disks.
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-7899 CVM_VOLD_CHANGE command received
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-13170 Preempting CM NID 1
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-0 Calling join complete
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-8062 master: not a cluster startup
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-10994 join completed for node 0
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-4123 cluster established successfully
2013 Jun 19 17:42:39 kyornas051_01 Had[30829]: VCS ERROR V-16-1-10079 System kyornas051_02 (Node '1') is in Down State - Membership: 0x1
2013 Jun 19 17:42:39 kyornas051_01 Had[30829]: VCS ERROR V-16-1-10322 System kyornas051_02 (Node '1') changed state from RUNNING to FAULTED
2013 Jun 19 17:42:39 kyornas051_01 sfsfs_event.network.alert: Node kyornas051_02 went offline.
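For reference, the LLT messages above can be replayed to confirm that the peer was silent for the full peer-inactivity window (16 seconds is the usual LLT peerinact default) before the link was declared expired. This parsing helper is only an illustration for reading the log, not an SFS tool:

```python
from datetime import datetime, timedelta
import re

# Illustrative sketch: take an "inactive N sec" line from the log above,
# back out when the last heartbeat was actually received, and compare it
# with the "expired" line. The gap should approximate the LLT peerinact
# timeout (16 seconds by default).
inactive_line = ("2013 Jun 19 17:42:26 kyornas051_01 kernel: LLT INFO "
                 "V-14-1-10032 link 0 (priveth0) node 1 inactive 8 sec (1698356198)")
expired_line = ("2013 Jun 19 17:42:34 kyornas051_01 kernel: LLT INFO "
                "V-14-1-10509 link 0 (priveth0) node 1 expired")

def ts(line):
    # Every messages-file line starts with "YYYY Mon DD HH:MM:SS"
    return datetime.strptime(" ".join(line.split()[:4]), "%Y %b %d %H:%M:%S")

inactive_secs = int(re.search(r"inactive (\d+) sec", inactive_line).group(1))
last_heartbeat = ts(inactive_line) - timedelta(seconds=inactive_secs)
window = (ts(expired_line) - last_heartbeat).total_seconds()
print(f"peer silent for ~{window:.0f}s before link expiry")  # ~16s
```

The 16-second silence across both private links is what drove GAB to reconfigure membership and fence node 1.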

2013 Jun 19 17:42:41 kyornas051_01 kernel: vxfs: msgcnt 617 Phase 0 - /dev/vx/dsk/sfsdg/_nlm_ - Blocking buffer reads for recovery. gencnt 1 primary 0 leavers: 0x2 0x0 0x0 0x0
2013 Jun 19 17:42:41 kyornas051_01 kernel:
2013 Jun 19 17:42:41 kyornas051_01 kernel: vxfs: msgcnt 618 Phase 0 - /dev/vx/dsk/sfsdg/oss2a-segment1 - Blocking buffer reads for recovery. gencnt 1 primary 0 leavers: 0x2 0x0 0x0 0x0
2013 Jun 19 17:42:41 kyornas051_01 kernel:
2013 Jun 19 17:42:41 kyornas051_01 kernel: vxfs: msgcnt 619 Phase 0 - /dev/vx/dsk/sfsdg/oss2a-sgwcg - Blocking buffer reads for recovery. gencnt 1 primary 0 leavers: 0x2 0x0 0x0 0x0

2013 Jun 19 17:42:43 kyornas051_01 kernel: vxfs: msgcnt 695 - /dev/vx/dsk/sfsdg/_nlm_ - PNOLT bitmap in vrt: 0x3 0x0 0x0 0x0, Log replay for per node logs: 0x1 0x0 0x0 0x0
2013 Jun 19 17:42:43 kyornas051_01 kernel: vxfs: msgcnt 724 Phase 2 - /dev/vx/dsk/sfsdg/_nlm_ - Buffer reads allowed
2013 Jun 19 17:42:43 kyornas051_01 kernel: vxfs: msgcnt 726 Phase 9 - /dev/vx/dsk/sfsdg/_nlm_ - Set Primary nodeid to 0
2013 Jun 19 17:42:43 kyornas051_01 kernel:
2013 Jun 19 17:42:43 kyornas051_01 kernel: vxfs: msgcnt 727 Phase 10 -/dev/vx/dsk/sfsdg/_nlm_ - Processing extended operations.
2013 Jun 19 17:42:43 kyornas051_01 kernel:

2013 Jun 19 17:43:06 kyornas051_01 kernel: nfsd: last server has exited
2013 Jun 19 17:43:06 kyornas051_01 kernel: nfsd: unexporting all filesystems
2013 Jun 19 17:43:06 kyornas051_01 rpc.mountd: Caught signal 15, un-registering and exiting.
2013 Jun 19 17:43:07 kyornas051_01 AgentFramework[30925]: VCS ERROR V-16-1-13067 Thread(4146072480) Agent is calling clean for resource(nasgw_nfs) because the resource became OFFLINE unexpectedly, on its own.
2013 Jun 19 17:43:07 kyornas051_01 Had[30829]: VCS ERROR V-16-1-13067 (kyornas051_01) Agent is calling clean for resource(nasgw_nfs) because the resource became OFFLINE unexpectedly, on its own.
2013 Jun 19 17:43:08 kyornas051_01 kernel: nfsd: failed to unregister export cache
2013 Jun 19 17:43:08 kyornas051_01 kernel: Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
2013 Jun 19 17:43:08 kyornas051_01 kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
2013 Jun 19 17:43:08 kyornas051_01 kernel: NFSD: starting 90-second grace period
2013 Jun 19 17:43:08 kyornas051_01 logger: event_notify.sh will insert a syslog marker now!
2013 Jun 19 17:43:08 kyornas051_01 sfsfs_event.marker.info: =============== Event Log Marker ===============
2013 Jun 19 17:43:14 kyornas051_01 kernel: svc: unknown version (4)
2013 Jun 19 17:43:15 kyornas051_01 logger: 2013 Jun 19 17:42:39 [kyornas051_01,alert,sfs] Node kyornas051_02 went offline.
2013 Jun 19 17:43:37 kyornas051_01 kernel: svc: unknown version (4)
2013 Jun 19 17:43:59 kyornas051_01 kernel: svc: unknown version (4)
2013 Jun 19 17:44:08 kyornas051_01 AgentFramework[30925]: VCS ERROR V-16-1-13006 Thread(4143971232) Resource(nasgw_nfs): clean procedure did not complete within the expected time.
2013 Jun 19 17:44:08 kyornas051_01 Had[30829]: VCS ERROR V-16-1-13006 (kyornas051_01) Resource(nasgw_nfs): clean procedure did not complete within the expected time.
2013 Jun 19 17:44:22 kyornas051_01 kernel: svc: unknown version (4)
2013 Jun 19 17:45:01 kyornas051_01 /usr/sbin/cron[25755]: (root) CMD (LOCKDISABLED=LOCKDISABLED /opt/VRTSnasgw/scripts/rrdtool.pl update >/dev/null 2>&1)
2013 Jun 19 17:45:01 kyornas051_01 /usr/sbin/cron[25773]: (root) CMD (/opt/VRTSnasgw/scripts/sav_event_notify.sh addevents >/dev/null 2>/dev/null)
2013 Jun 19 17:45:01 kyornas051_01 sfsfs_event.storage.alert.master: Slave node(s) successfully fenced out from cluster.
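Putting the messages-file entries above on a relative clock makes the failure sequence easier to follow. The timestamps and descriptions are copied from the log; the script itself is just a sketch:

```python
from datetime import datetime

# Throwaway helper: print the key events from the messages file as offsets
# from the moment the LLT links expired.
FMT = "%Y %b %d %H:%M:%S"
events = [
    ("2013 Jun 19 17:42:34", "LLT: both private links to node 1 expired"),
    ("2013 Jun 19 17:42:38", "GAB/VCS: new membership; fencing ejects node 1"),
    ("2013 Jun 19 17:42:41", "vxfs: buffer reads blocked for CFS recovery"),
    ("2013 Jun 19 17:43:06", "nfsd: last server exited, exports withdrawn"),
    ("2013 Jun 19 17:43:08", "nfsd restarted; 90-second grace period begins"),
    ("2013 Jun 19 17:45:01", "SFS: slave node reported fenced out of cluster"),
]
t0 = datetime.strptime(events[0][0], FMT)
offsets = [(datetime.strptime(t, FMT) - t0).total_seconds() for t, _ in events]
for dt, (_, what) in zip(offsets, events):
    print(f"+{dt:4.0f}s  {what}")
```

Note that the "fenced out" alert at +147s is only the event-log report; the actual ejection completed within a few seconds of the membership change.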
 

=============================================================================================

 

Engine log:

 

2013/06/19 17:42:33 VCS WARNING V-16-1-11155 LLT heartbeat link status changed. Previous status = 0x3; Current status = 0x1.
2013/06/19 17:42:38 VCS INFO V-16-1-10077 Received new cluster membership
2013/06/19 17:42:38 VCS NOTICE V-16-1-10112 System (kyornas051_01) - Membership: 0x1, DDNA: 0x0
2013/06/19 17:42:38 VCS NOTICE V-16-1-10034 RECONFIG received. VCS waiting for I/O fencing to be completed
2013/06/19 17:42:39 VCS NOTICE V-16-1-10036 I/O fencing completed
2013/06/19 17:42:39 VCS ERROR V-16-1-10079 System kyornas051_02 (Node '1') is in Down State - Membership: 0x1
2013/06/19 17:42:39 VCS ERROR V-16-1-10322 System kyornas051_02 (Node '1') changed state from RUNNING to FAULTED
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group NFS is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group Phantomgroup_priveth0 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group VIPgroup1 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group VIPgroup10 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group VIPgroup11 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group VIPgroup12 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group VIPgroup2 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group VIPgroup3 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount1 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount2 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount3 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount4 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount5 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount6 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount7 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount8 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount9 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount10 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount11 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount12 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount13 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount14 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount15 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount16 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount17 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount18 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount19 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount20 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount21 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount22 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount23 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount24 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount25 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount26 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount27 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount28 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount29 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group CanHostConsole is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group CanHostNLM is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group Phantomgroup_bond0 is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group cvm is offline on system kyornas051_02
2013/06/19 17:42:39 VCS NOTICE V-16-1-10446 Group ReconfigGroup is offline on system kyornas051_02
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group NFS
2013/06/19 17:42:39 VCS INFO V-16-1-50010 Group NFS is online or faulted on system kyornas051_01
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group NFS
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group Phantomgroup_priveth0
2013/06/19 17:42:39 VCS INFO V-16-1-50010 Group Phantomgroup_priveth0 is online or faulted on system kyornas051_01
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group Phantomgroup_priveth0
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group VIPgroup1
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group VIPgroup1
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group VIPgroup10
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group VIPgroup10
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group VIPgroup11
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group VIPgroup11
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group VIPgroup12
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group VIPgroup12
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group VIPgroup2
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group VIPgroup2
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group VIPgroup3
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group VIPgroup3
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group CanHostConsole
2013/06/19 17:42:39 VCS INFO V-16-1-50010 Group CanHostConsole is online or faulted on system kyornas051_01
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group CanHostConsole
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group CanHostNLM
2013/06/19 17:42:39 VCS INFO V-16-1-50010 Group CanHostNLM is online or faulted on system kyornas051_01
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group CanHostNLM
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group Phantomgroup_bond0
2013/06/19 17:42:39 VCS INFO V-16-1-50010 Group Phantomgroup_bond0 is online or faulted on system kyornas051_01
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group Phantomgroup_bond0
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_01 as potential target node for group ReconfigGroup
2013/06/19 17:42:39 VCS INFO V-16-1-50010 Group ReconfigGroup is online or faulted on system kyornas051_01
2013/06/19 17:42:39 VCS INFO V-16-1-10493 Evaluating kyornas051_02 as potential target node for group ReconfigGroup
2013/06/19 17:42:39 VCS INFO V-16-1-10494 System kyornas051_02 not in RUNNING state
2013/06/19 17:42:39 VCS INFO V-16-6-15025 (kyornas051_01) hatrigger:invoking nfs_preonline
2013/06/19 17:42:39 VCS INFO V-16-6-15076 (kyornas051_01) hatrigger:invoking regular preonline trigger if it exists
2013/06/19 17:42:39 VCS INFO V-16-6-15025 (kyornas051_01) hatrigger:invoking nfs_preonline
2013/06/19 17:42:39 VCS INFO V-16-6-15025 (kyornas051_01) hatrigger:invoking nfs_preonline
2013/06/19 17:42:39 VCS INFO V-16-6-15076 (kyornas051_01) hatrigger:invoking regular preonline trigger if it exists
2013/06/19 17:42:39 VCS INFO V-16-6-15025 (kyornas051_01) hatrigger:invoking nfs_preonline
2013/06/19 17:42:39 VCS INFO V-16-6-15025 (kyornas051_01) hatrigger:invoking nfs_preonline
2013/06/19 17:42:39 VCS INFO V-16-6-15076 (kyornas051_01) hatrigger:invoking regular preonline trigger if it exists
2013/06/19 17:42:39 VCS INFO V-16-6-15076 (kyornas051_01) hatrigger:invoking regular preonline trigger if it exists
2013/06/19 17:42:39 VCS INFO V-16-6-15076 (kyornas051_01) hatrigger:invoking regular preonline trigger if it exists
2013/06/19 17:42:39 VCS INFO V-16-6-15025 (kyornas051_01) hatrigger:invoking nfs_preonline
2013/06/19 17:42:39 VCS INFO V-16-6-15076 (kyornas051_01) hatrigger:invoking regular preonline trigger if it exists
2013/06/19 17:42:40 VCS INFO V-16-1-50135 User root fired command: hagrp -clear CIFSgroup10  from localhost
2013/06/19 17:42:40 VCS INFO V-16-1-50135 User root fired command: hagrp -clear CIFSgroup11  from localhost
2013/06/19 17:42:41 VCS INFO V-16-1-50135 User root fired command: hagrp -clear CIFSgroup3  from localhost
2013/06/19 17:42:42 VCS INFO V-16-1-50135 User root fired command: hagrp -clear CIFSgroup1  from localhost
2013/06/19 17:42:42 VCS INFO V-16-1-50135 User root fired command: hagrp -clear CIFSgroup12  from localhost
2013/06/19 17:42:42 VCS INFO V-16-1-50135 User root fired command: hagrp -clear CIFSgroup2  from localhost
2013/06/19 17:42:44 VCS WARNING V-16-1-11155 LLT heartbeat link status changed. Previous status = 0x1; Current status = 0x0.
2013/06/19 17:42:44 VCS INFO V-16-1-50135 User root fired command: hagrp -online CanHostConsole  kyornas051_01  from localhost
2013/06/19 17:42:44 VCS INFO V-16-1-50135 User root fired command: hagrp -online CanHostNLM  kyornas051_01  from localhost
2013/06/19 17:42:44 VCS INFO V-16-6-15002 (kyornas051_01) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/sysoffline kyornas051_02 FAULTED   successfully
2013/06/19 17:42:53 VCS INFO V-16-1-50135 User root fired command: hagrp -online VIPgroup11  kyornas051_01  from localhost
2013/06/19 17:42:53 VCS NOTICE V-16-1-10166 Initiating manual online of group VIPgroup11 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10460 Clearing start attribute for resource VIP11 of group VIPgroup11 on node kyornas051_02
2013/06/19 17:42:53 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group VIPgroup11 on all nodes
2013/06/19 17:42:53 VCS NOTICE V-16-1-10187 Received -nopre online command for group VIPgroup11 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10301 Initiating Online of Resource VIP11 (Owner: unknown, Group: VIPgroup11) on System kyornas051_01
2013/06/19 17:42:53 VCS INFO V-16-1-50135 User root fired command: hagrp -online VIPgroup10  kyornas051_01  from localhost
2013/06/19 17:42:53 VCS NOTICE V-16-1-10166 Initiating manual online of group VIPgroup10 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10460 Clearing start attribute for resource VIP10 of group VIPgroup10 on node kyornas051_02
2013/06/19 17:42:53 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group VIPgroup10 on all nodes
2013/06/19 17:42:53 VCS NOTICE V-16-1-10187 Received -nopre online command for group VIPgroup10 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10301 Initiating Online of Resource VIP10 (Owner: unknown, Group: VIPgroup10) on System kyornas051_01
2013/06/19 17:42:53 VCS INFO V-16-1-50135 User root fired command: hagrp -online VIPgroup12  kyornas051_01  from localhost
2013/06/19 17:42:53 VCS NOTICE V-16-1-10166 Initiating manual online of group VIPgroup12 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10460 Clearing start attribute for resource VIP12 of group VIPgroup12 on node kyornas051_02
2013/06/19 17:42:53 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group VIPgroup12 on all nodes
2013/06/19 17:42:53 VCS NOTICE V-16-1-10187 Received -nopre online command for group VIPgroup12 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10301 Initiating Online of Resource VIP12 (Owner: unknown, Group: VIPgroup12) on System kyornas051_01
2013/06/19 17:42:53 VCS INFO V-16-6-15002 (kyornas051_01) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/preonline kyornas051_01 VIPgroup11 MANUAL  successfully
2013/06/19 17:42:53 VCS INFO V-16-1-50135 User root fired command: hagrp -online VIPgroup1  kyornas051_01  from localhost
2013/06/19 17:42:53 VCS NOTICE V-16-1-10166 Initiating manual online of group VIPgroup1 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10460 Clearing start attribute for resource VIP1 of group VIPgroup1 on node kyornas051_02
2013/06/19 17:42:53 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group VIPgroup1 on all nodes
2013/06/19 17:42:53 VCS NOTICE V-16-1-10187 Received -nopre online command for group VIPgroup1 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10301 Initiating Online of Resource VIP1 (Owner: unknown, Group: VIPgroup1) on System kyornas051_01
2013/06/19 17:42:53 VCS INFO V-16-1-50135 User root fired command: hagrp -online VIPgroup2  kyornas051_01  from localhost
2013/06/19 17:42:53 VCS NOTICE V-16-1-10166 Initiating manual online of group VIPgroup2 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10460 Clearing start attribute for resource VIP2 of group VIPgroup2 on node kyornas051_02
2013/06/19 17:42:53 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group VIPgroup2 on all nodes
2013/06/19 17:42:53 VCS NOTICE V-16-1-10187 Received -nopre online command for group VIPgroup2 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10301 Initiating Online of Resource VIP2 (Owner: unknown, Group: VIPgroup2) on System kyornas051_01
2013/06/19 17:42:53 VCS INFO V-16-1-50135 User root fired command: hagrp -online VIPgroup3  kyornas051_01  from localhost
2013/06/19 17:42:53 VCS NOTICE V-16-1-10166 Initiating manual online of group VIPgroup3 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10460 Clearing start attribute for resource VIP3 of group VIPgroup3 on node kyornas051_02
2013/06/19 17:42:53 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group VIPgroup3 on all nodes
2013/06/19 17:42:53 VCS NOTICE V-16-1-10187 Received -nopre online command for group VIPgroup3 on system kyornas051_01
2013/06/19 17:42:53 VCS NOTICE V-16-1-10301 Initiating Online of Resource VIP3 (Owner: unknown, Group: VIPgroup3) on System kyornas051_01
2013/06/19 17:42:53 VCS INFO V-16-6-15002 (kyornas051_01) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/preonline kyornas051_01 VIPgroup10 MANUAL  successfully
2013/06/19 17:42:53 VCS INFO V-16-6-15002 (kyornas051_01) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/preonline kyornas051_01 VIPgroup12 MANUAL  successfully
2013/06/19 17:42:53 VCS INFO V-16-6-15002 (kyornas051_01) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/preonline kyornas051_01 VIPgroup1 MANUAL  successfully
2013/06/19 17:42:53 VCS INFO V-16-6-15002 (kyornas051_01) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/preonline kyornas051_01 VIPgroup2 MANUAL  successfully
2013/06/19 17:42:53 VCS INFO V-16-6-15002 (kyornas051_01) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/preonline kyornas051_01 VIPgroup3 MANUAL  successfully
2013/06/19 17:42:59 VCS INFO V-16-1-50135 User root fired command: hares -modify cfsmount28  Primary  kyornas051_01  from localhost
2013/06/19 17:43:01 VCS INFO V-16-1-10298 Resource VIP1 (Owner: unknown, Group: VIPgroup1) is online on kyornas051_01 (VCS initiated)
2013/06/19 17:43:01 VCS NOTICE V-16-1-10447 Group VIPgroup1 is online on system kyornas051_01
2013/06/19 17:43:05 VCS INFO V-16-1-10298 Resource VIP12 (Owner: unknown, Group: VIPgroup12) is online on kyornas051_01 (VCS initiated)
2013/06/19 17:43:05 VCS NOTICE V-16-1-10447 Group VIPgroup12 is online on system kyornas051_01
2013/06/19 17:43:06 VCS INFO V-16-1-50135 User root fired command: hares -clear VIP1  from localhost
2013/06/19 17:43:06 VCS INFO V-16-1-50135 User root fired command: MSG_RES_PROBE nasgw_nfs  kyornas051_01  from localhost
2013/06/19 17:43:06 VCS WARNING V-16-10031-7017 (kyornas051_01) NFS:nasgw_nfs:monitor:nfsd filesystem not mounted, returning offline
2013/06/19 17:43:07 VCS ERROR V-16-2-13067 (kyornas051_01) Agent is calling clean for resource(nasgw_nfs) because the resource became OFFLINE unexpectedly, on its own.
2013/06/19 17:43:10 VCS INFO V-16-1-50135 User root fired command: hares -clear VIP12  from localhost
2013/06/19 17:43:10 VCS INFO V-16-6-15002 (kyornas051_01) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/postonline kyornas051_01 VIPgroup12   successfully
2013/06/19 17:43:12 VCS INFO V-16-1-50135 User root fired command: hares -modify cfsmount22  Primary  kyornas051_01  from localhost
2013/06/19 17:43:18 VCS INFO V-16-1-50135 User root fired command: hares -modify cfsmount29  Primary  kyornas051_01  from localhost
2013/06/19 17:43:22 VCS INFO V-16-1-50135 User root fired command: hares -modify cfsmount23  Primary  kyornas051_01  from localhost
2013/06/19 17:43:37 VCS INFO V-16-1-50135 User root fired command: hares -modify cfsmount24  Primary  kyornas051_01  from localhost
2013/06/19 17:43:50 VCS INFO V-16-1-50135 User root fired command: hares -modify cfsmount25  Primary  kyornas051_01  from localhost
2013/06/19 17:44:00 VCS INFO V-16-2-13001 (kyornas051_01) Resource(NicMonitor_bond0): Output of the completed operation (monitor)
WARNING: pinging broadcast address
2013/06/19 17:44:08 VCS ERROR V-16-2-13006 (kyornas051_01) Resource(nasgw_nfs): clean procedure did not complete within the expected time.
2013/06/19 17:44:53 VCS INFO V-16-6-15002 (kyornas051_01) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/postonline kyornas051_01 VIPgroup1   successfully
2013/06/19 17:45:00 VCS INFO V-16-2-13001 (kyornas051_01) Resource(NicMonitor_bond0): Output of the completed operation (monitor)
WARNING: pinging broadcast address
2013/06/19 17:47:00 VCS INFO V-16-2-13001 (kyornas051_01) Resource(canHostConsole_consoleNIC): Output of the completed operation (monitor)
WARNING: pinging broadcast address
2013/06/19 17:47:54 VCS WARNING V-16-2-13012 (kyornas051_01) Resource(VIP10): online procedure did not complete within the expected time.
2013/06/19 17:47:54 VCS ERROR V-16-2-13065 (kyornas051_01) Agent is calling clean for resource(VIP10) because online did not complete within the expected time.
2013/06/19 17:47:54 VCS WARNING V-16-2-13012 (kyornas051_01) Resource(VIP11): online procedure did not complete within the expected time.
2013/06/19 17:47:54 VCS WARNING V-16-2-13012 (kyornas051_01) Resource(VIP2): online procedure did not complete within the expected time.
2013/06/19 17:47:54 VCS ERROR V-16-2-13065 (kyornas051_01) Agent is calling clean for resource(VIP2) because online did not complete within the expected time.
2013/06/19 17:47:54 VCS WARNING V-16-2-13012 (kyornas051_01) Resource(VIP3): online procedure did not complete within the expected time.
2013/06/19 17:47:54 VCS ERROR V-16-2-13065 (kyornas051_01) Agent is calling clean for resource(VIP11) because online did not complete within the expected time.
2013/06/19 17:47:54 VCS ERROR V-16-2-13065 (kyornas051_01) Agent is calling clean for resource(VIP3) because online did not complete within the expected time.
2013/06/19 17:47:55 VCS INFO V-16-2-13068 (kyornas051_01) Resource(VIP10) - clean completed successfully.
2013/06/19 17:47:55 VCS INFO V-16-2-13068 (kyornas051_01) Resource(VIP11) - clean completed successfully.
2013/06/19 17:47:55 VCS INFO V-16-2-13068 (kyornas051_01) Resource(VIP3) - clean completed successfully.
2013/06/19 17:47:55 VCS INFO V-16-2-13068 (kyornas051_01) Resource(VIP2) - clean completed successfully.
2013/06/19 17:47:55 VCS INFO V-16-2-13072 (kyornas051_01) Resource(VIP10): Agent is retrying online (attempt number 1 of 2).
2013/06/19 17:47:55 VCS INFO V-16-2-13072 (kyornas051_01) Resource(VIP11): Agent is retrying online (attempt number 1 of 2).
2013/06/19 17:47:55 VCS INFO V-16-2-13072 (kyornas051_01) Resource(VIP2): Agent is retrying online (attempt number 1 of 2).
2013/06/19 17:47:55 VCS INFO V-16-2-13072 (kyornas051_01) Resource(VIP3): Agent is retrying online (attempt number 1 of 2).
2013/06/19 17:48:00 VCS INFO V-16-2-13001 (kyornas051_01) Resource(canHostConsole_consoleNIC): Output of the completed operation (monitor)
WARNING: pinging broadcast address
2013/06/19 17:49:00 VCS INFO V-16-2-13001 (kyornas051_01) Resource(canHostConsole_consoleNIC): Output of the completed operation (monitor)
WARNING: pinging broadcast address
2013/06/19 17:50:59 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(NicMonitor_priveth1) - monitor procedure did not complete within the expected time.
2013/06/19 17:50:59 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(nlmmasterNIC) - monitor procedure did not complete within the expected time.
2013/06/19 17:50:59 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(canHostConsole_consoleNIC) - monitor procedure did not complete within the expected time.
2013/06/19 17:50:59 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(NicMonitor_bond0) - monitor procedure did not complete within the expected time.
2013/06/19 17:50:59 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(NicMonitor_pubeth1) - monitor procedure did not complete within the expected time.
2013/06/19 17:50:59 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(NicMonitor_pubeth4) - monitor procedure did not complete within the expected time.
2013/06/19 17:50:59 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(canHostNLM_nlmmaster_device) - monitor procedure did not complete within the expected time.
2013/06/19 17:50:59 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(NicMonitor_pubeth3) - monitor procedure did not complete within the expected time.
2013/06/19 17:50:59 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(NicMonitor_pubeth2) - monitor procedure did not complete within the expected time.
2013/06/19 17:51:01 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(NicMonitor_priveth0) - monitor procedure did not complete within the expected time.
2013/06/19 17:51:08 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(nasgw_smweb) - monitor procedure did not complete within the expected time.
2013/06/19 17:52:01 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(consoleNIC) - monitor procedure did not complete within the expected time.
2013/06/19 17:52:01 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(NicMonitor_pubeth0) - monitor procedure did not complete within the expected time.
2013/06/19 17:52:01 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(phantomNIC_bond0) - monitor procedure did not complete within the expected time.
2013/06/19 17:52:01 VCS ERROR V-16-2-13027 (kyornas051_01) Resource(NicMonitor_pubeth5) - monitor procedure did not complete within the expected time.
2013/06/19 17:52:56 VCS WARNING V-16-2-13012 (kyornas051_01) Resource(VIP2): online procedure did not complete within the expected time.
2013/06/19 17:52:56 VCS WARNING V-16-2-13012 (kyornas051_01) Resource(VIP3): online procedure did not complete within the expected time.
2013/06/19 17:52:56 VCS WARNING V-16-2-13012 (kyornas051_01) Resource(VIP11): online procedure did not complete within the expected time.
2013/06/19 17:52:56 VCS WARNING V-16-2-13012 (kyornas051_01) Resource(VIP10): online procedure did not complete within the expected time.
2013/06/19 17:52:56 VCS ERROR V-16-2-13065 (kyornas051_01) Agent is calling clean for resource(VIP11) because online did not complete within the expected time.
2013/06/19 17:52:56 VCS ERROR V-16-2-13065 (kyornas051_01) Agent is calling clean for resource(VIP10) because online did not complete within the expected time.
2013/06/19 17:52:56 VCS ERROR V-16-2-13065 (kyornas051_01) Agent is calling clean for resource(VIP2) because online did not complete within the expected time.
2013/06/19 17:52:56 VCS ERROR V-16-2-13065 (kyornas051_01) Agent is calling clean for resource(VIP3) because online did not complete within the expected time.
2013/06/19 17:53:57 VCS ERROR V-16-2-13006 (kyornas051_01) Resource(VIP10): clean procedure did not complete within the expected time.
2013/06/19 17:53:57 VCS ERROR V-16-2-13006 (kyornas051_01) Resource(VIP3): clean procedure did not complete within the expected time.
2013/06/19 17:53:57 VCS ERROR V-16-2-13006 (kyornas051_01) Resource(VIP11): clean procedure did not complete within the expected time.
2013/06/19 17:53:57 VCS ERROR V-16-2-13006 (kyornas051_01) Resource(VIP2): clean procedure did not complete within the expected time.
2013/06/19 17:57:08 VCS ERROR V-16-2-13210 (kyornas051_01) Agent is calling clean for resource(nasgw_smweb) because 4 successive invocations of the monitor procedure did not complete within the expected time.
2013/06/19 17:57:09 VCS INFO V-16-2-13068 (kyornas051_01) Resource(nasgw_smweb) - clean completed successfully.
2013/06/19 17:58:10 VCS ERROR V-16-2-13077 (kyornas051_01) Agent is unable to offline resource(nasgw_smweb). Administrative intervention may be required.
2013/06/19 18:18:42 VCS NOTICE V-16-1-11022 VCS engine (had) started
2013/06/19 18:18:42 VCS INFO V-16-1-10196 Cluster logger started
2013/06/19 18:18:42 VCS NOTICE V-16-1-11050 VCS engine version=5.0
2013/06/19 18:18:42 VCS NOTICE V-16-1-11051 VCS engine join version=5.0.40.0
2013/06/19 18:18:42 VCS NOTICE V-16-1-11052 VCS engine pstamp=Veritas-5.0MP4HF2-08/02/10-23:10:00
2013/06/19 18:18:42 VCS NOTICE V-16-1-10114 Opening GAB library
2013/06/19 18:18:42 VCS NOTICE V-16-1-10619 'HAD' starting on: kyornas051_01
2013/06/19 18:18:42 VCS INFO V-16-1-10125 GAB timeout set to 15000 ms
2013/06/19 18:18:46 VCS INFO V-16-1-10077 Received new cluster membership
2013/06/19 18:18:46 VCS NOTICE V-16-1-10112 System (kyornas051_01) - Membership: 0x1, DDNA: 0x0
2013/06/19 18:18:46 VCS NOTICE V-16-1-10086 System kyornas051_01 (Node '0') is in Regular Membership - Membership: 0x1
2013/06/19 18:18:46 VCS NOTICE V-16-1-10322 System kyornas051_01 (Node '0') changed state from CURRENT_DISCOVER_WAIT to LOCAL_BUILD
2013/06/19 18:18:48 VCS NOTICE V-16-1-10032 VxFEN driver available. Local node id=0
2013/06/19 18:18:48 VCS NOTICE V-16-1-10322 System kyornas051_01 (Node '0') changed state from LOCAL_BUILD to RUNNING
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/Application/ApplicationAgent for resource type Application successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/CFSMount/CFSMountAgent for resource type CFSMount successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/CFSfsckd/CFSfsckdAgent for resource type CFSfsckd successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/CVMCluster/CVMClusterAgent for resource type CVMCluster successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/CVMVolDg/CVMVolDgAgent for resource type CVMVolDg successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/CVMVxconfigd/CVMVxconfigdAgent for resource type CVMVxconfigd successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/IP/IPAgent for resource type IP successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/NFS/NFSAgent for resource type NFS successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/NFSRestart/NFSRestartAgent for resource type NFSRestart successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/NIC/NICAgent for resource type NIC successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/NetBios/NetBiosAgent for resource type NetBios successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/Phantom/PhantomAgent for resource type Phantom successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/Proxy/ProxyAgent for resource type Proxy successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/SambaServer/SambaServerAgent for resource type SambaServer successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/Share/ShareAgent for resource type Share successfully started at Wed Jun 19 18:18:48 2013
2013/06/19 18:18:48 VCS NOTICE V-16-1-10016 Agent /opt/VRTSvcs/bin/HostMonitor for resource type HostMonitor successfully started at Wed Jun 19 18:18:48 2013

 

==================================================

 

I can see that one of the nodes, 51_02, was fenced out due to a heartbeat issue.

But I could not understand why the filesystems were affected. Why would they be impacted, given that this is a parallel cluster?
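For context (based on the vxfs messages in this thread's logs, e.g. "Blocking buffer reads for recovery ... leavers: 0x2"): even in a parallel cluster, when a node leaves the membership the surviving nodes briefly freeze I/O on shared (CFS) filesystems while the leaver's intent log is replayed and, if needed, a new CFS primary is elected. That would explain why df and logins appeared to hang. A couple of hedged commands to inspect the CFS roles (the mount point below is a placeholder):

```
# Which node is the CFS primary for a given mount point?
fsclustadm -v showprimary /vx/fs1

# Cluster-wide state of the shared mounts:
cfsmntadm display
```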

 

 

 

 


NFS share doesn't failover due to being busy

I need a solution

Hello!

We are trying to implement a failover cluster that hosts a database and files on a clustered NFS share.
The files are used by the clustered application itself and by several other hosts.

The problem is that when the active node fails (I mean an ungraceful server shutdown or a clustered service stop), the other hosts continue to use files on our cluster-hosted NFS share.
That leads to the NFS share "hanging": it no longer works on the first node, yet still cannot be brought online on the second node. Other hosts also experience hanging requests to that NFS share.
I will attach logs later, where the problem can be observed.

The only corrective action we have found is a total shutdown and sequential restart of all cluster nodes and the other hosts.

Please recommend best-practice actions for using an NFS share on Veritas Cluster Server (perhaps some start/stop/clean scripts included as a cluster resource, or additional cluster configuration options).

Thank you, in advance!

Best regards,
Maxim Semenov.
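For reference, the usual best practice for a failover NFS share under VCS is that clients mount through a virtual IP that fails over with the share, and NFS lock state fails over too (NFSRestart with NFSLockFailover); clients mounting via a node's own hostname is a common cause of the hang described above. A minimal main.cf sketch, with placeholder names, addresses, and paths (exact attributes vary by VCS version, so verify against the bundled agents guide for your release):

```
group nfs_sg (
    SystemList = { node1 = 0, node2 = 1 }
    )

    NFS nfs_server (
        Nservers = 16
        )

    Share nfs_share (
        PathName = "/export/data"
        Options = "rw"
        )

    IP nfs_vip (
        Device = eth0
        Address = "192.168.1.100"
        NetMask = "255.255.255.0"
        )

    NFSRestart nfs_restart (
        NFSRes = nfs_server
        LocksPathName = "/export/locks"
        NFSLockFailover = 1
        )

    nfs_share requires nfs_server
    nfs_vip requires nfs_share
    nfs_restart requires nfs_vip
```

With lock failover in place, clients should see requests stall during the switchover and then recover, instead of hanging indefinitely.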

need details regarding known issue and one case raised with symantec

I need a solution

 

Hi Friends

 

While investigating an RCA case for a hang on a Symantec FileStore NAS,
we raised a case with Symantec and found that this is a known bug: 2384962.

The case we raised with Symantec was 418-911-600.

We are now facing a similar issue on another system.

Can someone please help me get details about known issue 2384962,
and details from case 418-911-600?

 

 

Veritas Cluster Server heartbeat link down, jeopardy state

I need a solution

Hello Everyone,
I am having a problem with the VCS heartbeat links.

VCS is running on a Solaris v440 machine. The VCS version is 4.0 on Solaris 9; I know it's old and EOL. I'm just hoping to pinpoint the solution to this problem.
The VCS heartbeat links run on two separate VLANs. This is a 2-node cluster.
Recently the old switch was taken out and a new Cisco 3750 switch was added. The switch shows the cables are connected, and I can see link-up from the switch side.
The ce4 links on both servers are not coming up. Any ideas besides a faulty VLAN? How do I test communications on that particular VLAN?
Here are the results of various commands; any help is appreciated!
Thank you!

#lltstat -n
LLT node information:
    Node                 State    Links
     0    node1          OPEN        1
   * 1    node2          OPEN        2

#lltstat -nvv|head
LLT node information:
    Node                 State    Link  Status  Address
     0 node1          OPEN
                                  ce4   DOWN
                                  ce6   UP      00:03:BA:94:F8:61
   * 1 node2          OPEN
                                  ce4   UP      00:03:BA:94:A4:6F
                                  ce6   UP      00:03:BA:94:A4:71
     2                   CONNWAIT
                                  ce4   DOWN

#lltstat -n
LLT node information:
    Node                 State    Links
   * 0    node1          OPEN        2
     1    node2          OPEN        1

#lltstat -nvv|head
LLT node information:
    Node                 State    Link  Status  Address
   * 0 node1          OPEN
                                  ce4   UP      00:03:BA:94:F8:5F
                                  ce6   UP      00:03:BA:94:F8:61
     1 node2          OPEN
                                  ce4   DOWN
                                  ce6   UP      00:03:BA:94:A4:71
     2                   CONNWAIT
                                  ce4   DOWN

#gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   49c917 membership 01
Port a gen   49c917   jeopardy ;1
Port h gen   49c91e membership 01
Port h gen   49c91e   jeopardy ;
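One hedged way to test layer-2/3 connectivity on that VLAN, without touching LLT itself, is to temporarily plumb test addresses on ce4 and ping across (addresses below are placeholders; unplumb them when done, since LLT runs directly over that link):

```
# On node1 (Solaris):
ifconfig ce4 plumb 10.99.99.1 netmask 255.255.255.0 up

# On node2:
ifconfig ce4 plumb 10.99.99.2 netmask 255.255.255.0 up

# Then from node1:
ping 10.99.99.2

# Clean up on both nodes when done:
ifconfig ce4 unplumb
```

The VRTSllt package also ships a DLPI-level test utility, dlpiping, under /opt/VRTSllt; check its man page for the exact syntax on your version.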

 

need to know the meaning of logs

I need a solution

 

 

Can someone please tell me what has happened?

 

2013 Jun 19 17:42:20 kyornas051_01 kernel: LLT INFO V-14-1-10205 link 1 (priveth1) node 1 in trouble
2013 Jun 19 17:42:20 kyornas051_01 kernel: LLT INFO V-14-1-10205 link 0 (priveth0) node 1 in trouble
2013 Jun 19 17:42:26 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 8 sec (1698350566)
2013 Jun 19 17:42:26 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 8 sec (1698356198)
2013 Jun 19 17:42:27 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 9 sec (1698350566)
2013 Jun 19 17:42:27 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 9 sec (1698356198)
2013 Jun 19 17:42:28 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 10 sec (1698350566)
2013 Jun 19 17:42:28 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 10 sec (1698356198)
2013 Jun 19 17:42:29 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 11 sec (1698350566)
2013 Jun 19 17:42:29 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 11 sec (1698356198)
2013 Jun 19 17:42:30 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 12 sec (1698350566)
2013 Jun 19 17:42:30 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 12 sec (1698356198)
2013 Jun 19 17:42:31 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 13 sec (1698350566)
2013 Jun 19 17:42:31 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 13 sec (1698356198)
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (priveth1) node 1. 4 more to go.
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 14 sec (1698350566)
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (priveth1) node 1. 3 more to go.
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 4 more to go.
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (priveth1) node 1. 2 more to go.
2013 Jun 19 17:42:32 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 14 sec (1698356198)
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 3 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (priveth1) node 1. 1 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 2 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 1 (priveth1) node 1. 0 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 1 (priveth1) node 1 inactive 15 sec (1698350566)
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 1 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10509 link 1 (priveth1) node 1 expired
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (priveth0) node 1. 0 more to go.
2013 Jun 19 17:42:33 kyornas051_01 kernel: LLT INFO V-14-1-10032 link 0 (priveth0) node 1 inactive 15 sec (1698356198)
2013 Jun 19 17:42:34 kyornas051_01 kernel: LLT INFO V-14-1-10509 link 0 (priveth0) node 1 expired
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port h gen  1132317 membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port v gen  113231a membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port w gen  113231c membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port a gen  1132305 membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port b gen  1132314 membership 0
2013 Jun 19 17:42:38 kyornas051_01 kernel: GAB INFO V-15-1-20036 Port f gen  113231e membership 0
2013 Jun 19 17:42:38 kyornas051_01 Had[30829]: VCS INFO V-16-1-10077 Received new cluster membership
2013 Jun 19 17:42:38 kyornas051_01 kernel: VXFEN INFO V-11-1-68 Completed ejection of leaving node(s) from data disks.
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-7899 CVM_VOLD_CHANGE command received
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-13170 Preempting CM NID 1
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-0 Calling join complete
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-8062 master: not a cluster startup
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-10994 join completed for node 0
2013 Jun 19 17:42:38 kyornas051_01 vxvm:vxconfigd: V-5-1-4123 cluster established successfully
2013 Jun 19 17:42:39 kyornas051_01 Had[30829]: VCS ERROR V-16-1-10079 System kyornas051_02 (Node \'1\') is in Down State - Membership: 0x1
2013 Jun 19 17:42:39 kyornas051_01 Had[30829]: VCS ERROR V-16-1-10322 System kyornas051_02 (Node \'1\') changed state from RUNNING to FAULTED
2013 Jun 19 17:42:39 kyornas051_01 sfsfs_event.network.alert: Node kyornas051_02 went offline.
2013 Jun 19 17:42:39 kyornas051_01 sshd[17509]: Accepted publickey for root from 172.16.0.3 port 42449 ssh2
2013 Jun 19 17:42:39 kyornas051_01 sshd[17515]: Accepted publickey for root from 172.16.0.3 port 42450 ssh2
2013 Jun 19 17:42:41 kyornas051_01 kernel: vxfs: msgcnt 617 Phase 0 - /dev/vx/dsk/sfsdg/_nlm_ - Blocking buffer reads for recovery. gencnt 1 primary 0 leavers: 0x2 0x0 0x0 0x0
2013 Jun 19 17:42:41 kyornas051_01 kernel:

need to know the VCS resource status if an intentional recycle occurs for the DB

I need a solution

Could you please clarify the scenario below:

Our DB admin needs to recycle the DB instance at the Oracle level (without VCS's knowledge). The Critical attribute of this resource and of all its dependent resources is 0.

Will the resource go OFFLINE/ONLINE, or FAULTED?

The same scenario applies to a filesystem resource: say the filesystem is unmounted and remounted manually with the umount/mount commands (without VCS's knowledge), where Critical is set to 0 on the resource and on all of its dependent resources.

Will the resource go OFFLINE/ONLINE, or FAULTED?

Please provide complete details of how this works.

Thanks
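In general (worth verifying on a test cluster): a resource stopped outside of VCS is reported offline by its monitor, the agent treats that as an unexpected offline, calls clean, and marks the resource FAULTED. With Critical = 0 on the resource and its dependents, the group does not fail over, but the resource still shows FAULTED until cleared. For a planned recycle, the cleaner approach is to freeze the group so VCS ignores the state changes; a hedged sketch with placeholder names:

```
# Freeze so VCS ignores the planned recycle:
hagrp -freeze oracle_sg

# ... DBA recycles the instance / filesystem is remounted ...

hagrp -unfreeze oracle_sg

# If a resource did fault, clear it afterwards:
hares -clear oradb_res -sys node1
```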


Nodes in ADMIN_WAIT_STATE & LEAVING after reboot

I need a solution

Hi

I have a little knowledge of VCS. After a reboot of a server I found one node in the ADMIN_WAIT state and the other node in the LEAVING state. This is a 2-node cluster.

Did this occur due to an incorrect configuration file? Correct me if I am wrong.

Please explain in which situations we find these states,

and in which situations we find a .stale file / STALE error related to the main.cf file.

Thanks and Regards
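For background: ADMIN_WAIT generally means had could not obtain a valid configuration to build from (for example, the local main.cf is invalid, or the systems disagreed during the build). A .stale file is created when the configuration is opened read-write (haconf -makerw) and VCS stops before the configuration is dumped back to disk (haconf -dump -makero); starting from a stale configuration leads to a stale ADMIN_WAIT state. A hedged recovery sketch (the system name is a placeholder; verify against your version's admin guide):

```
# Check that the local configuration is valid:
hacf -verify /etc/VRTSvcs/conf/config

# If it is, force VCS to build the cluster configuration from this node's copy:
hasys -force node_a
```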

 

 


Never Been to Vision? Tell Us Why and Receive 200 Connect Rewards Points and a New Hoodie

I do not need a solution (just sharing information)

Never attended a Symantec Vision conference?  We want to know why.  For the first 20 people (10 in the US and 10 in Europe) who sign up and participate in a 30-minute interview before August 22nd, we'll award you 200 Connect points AND send you a custom Vision Hoodie.  To set up a time, please click here or email your contact information to vision@symantec.com.


Need to configure autorestart

I need a solution

Hi

We are trying to configure VCS on our systems.  We have the groups set up and everything works well in "sunny day" situations, meaning we can offline, online, and switch the VCS groups between the servers.  What doesn't work: while testing, we manually shut down an Oracle database and/or listener to simulate a disaster, and VCS makes no attempt to restart it.  hagrp -state does show the group as "partial", and then "online" when we manually bring it back up.  But we need VCS to handle this automatically.

What do I need to configure to get this to work?

Thanks,
Laura
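What usually governs this is the resource-level RestartLimit: when a monitor finds a resource unexpectedly offline, the agent only tries to restart it in place if RestartLimit is greater than 0; otherwise the resource simply faults (and, with Critical = 1, the group fails over). A hedged sketch with placeholder resource names:

```
haconf -makerw

# Allow one in-place restart attempt for the database resource
# (RestartLimit is a static type attribute, so override it first):
hares -override oradb_res RestartLimit
hares -modify oradb_res RestartLimit 1

# Or set it for every resource of the type:
hatype -modify Oracle RestartLimit 1

haconf -dump -makero
```

Also confirm the resources are Critical = 1 if you want a group failover once restart attempts are exhausted.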

 


how to replace network adapter cards in the Win2008 vcs cluster?

I need a solution

HI,

WIN2008R2, VCS 6.0

I want to know how to change the private and public network adapters. Is there anything more that needs to be done after changing the NICs?

ricky

 

 

 

 

How to remove a service group

I need a solution

I would like to remove the SG. Will the steps below work?

hagrp -offline SG -sys server1
hares -delete <resource>    (delete parent resources first, since resources depend on others; Parent: tony, Child: ting)
So, run the command: hares -delete tony (first)
hagrp -delete SG
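Your ordering is right: offline first, delete parents before children, then the group. A hedged sketch of the full sequence, including making the configuration writable and saving it afterwards, using your names:

```
haconf -makerw

hagrp -offline SG -sys server1

# Parent first (tony depends on ting), then the child:
hares -delete tony
hares -delete ting

hagrp -delete SG

haconf -dump -makero
```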


VCS shows resource offline even though it can be started

I need a solution

VCS: 6.0.1

RHEL 6.4

I have several Application resources that VCS shows as OFFLINE even though they are actually running. I can start them manually, i.e. using the commands configured in the resource; even the StartProgram works.

Resource description:

 

runfcgipelix3 State                 <host1>  OFFLINE
runfcgipelix3 State                 <host2> OFFLINE
runfcgipelix3 CleanProgram          global        /opt/app_pelix/pelix/fcgi/runfcgipelix3.sh stop
runfcgipelix3 ComputeStats          global        0
runfcgipelix3 ContainerInfo         global        Type Name Enabled
runfcgipelix3 EnvFile               global        
runfcgipelix3 MonitorProcesses      global        
runfcgipelix3 MonitorProgram        global        /opt/app_pelix/bin/runfcgipelix3_status
runfcgipelix3 PidFiles              global        
runfcgipelix3 ResContainerInfo      global        Type Name Enabled
runfcgipelix3 ResourceInfo          global        State Stale Msg TS
runfcgipelix3 ResourceRecipients    global        
runfcgipelix3 StartProgram          global        /opt/app_pelix/pelix/fcgi/runfcgipelix3.sh start
runfcgipelix3 StopProgram           global        /opt/app_pelix/pelix/fcgi/runfcgipelix3.sh stop
runfcgipelix3 TriggerPath           global        
runfcgipelix3 TriggerResRestart     global        0
runfcgipelix3 TriggerResStateChange global        0
runfcgipelix3 TriggersEnabled       global        
runfcgipelix3 UseSUDash             global        0
runfcgipelix3 User                  global        pelix
 
Monitoring script:
 
#!/bin/sh
 
 
#. /etc/rola/minimalenv
. $HOME/.rolaenv
 
WORKDIR="$HOME/fcgi"
SOCKET="socket/fcgipelix3"
CONFIG="FCgiPelix3.conf"
 
 
case "$1" in
start)
$0 stop
sleep 1
cd $WORKDIR
nohup ./FCgiPelix3 -L -u $SOCKET -f $CONFIG < /dev/null \
> /dev/null 2>&1 &
;;
test)
$0 stop
sleep 1
cd $WORKDIR
./FCgiPelix3 -L -u $SOCKET -f $CONFIG
;;
status)
N=`abs FCgiPelix3 | wc -l`
echo "status: $N"
if [ $N -gt 0 ]; then
exit 110
else
exit 100
fi
;;
stop)
abs -k FCgiPelix3
;;
*)
echo "usage: $0 {start|stop|status|test}"
;;
esac
 
Executing the monitoring command:
[root@<host1> bin]# su - pelix -c "/opt/app_pelix/pelix/fcgi/runfcgipelix3.sh status"
status: 1
[root@<host1> bin]# echo $?
110
 
The monitoring script does return the correct value, but this does not seem to get picked up by VCS.
There are three resources set up like this, and none of them work.
 
Thank you for your help.
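One thing worth double-checking: the resource's MonitorProgram is /opt/app_pelix/bin/runfcgipelix3_status, while the script tested by hand is runfcgipelix3.sh status. Confirm that the configured program returns the same codes, and that it behaves the same when run as the configured User (pelix) under the agent's restricted environment (the script sources $HOME/.rolaenv, which may not resolve the same way for the agent). For reference, the Application agent convention the script already follows is exit code 110 = online, 100 = offline. A minimal, self-contained illustration of that convention, using pgrep in place of the site-specific abs helper (pgrep here is my assumption, not part of the original setup):

```shell
#!/bin/sh
# Sketch of the VCS Application-agent monitor convention:
# report 110 when the named process is running ("online"),
# 100 when it is not ("offline").  A real MonitorProgram must
# *exit* with these codes; this sketch prints them instead,
# purely for illustration.
monitor() {
    if pgrep -x "$1" >/dev/null 2>&1; then
        echo 110   # online
    else
        echo 100   # offline
    fi
}

sleep 5 &                     # stand-in for the FCgiPelix3 daemon
pid=$!
monitor sleep                 # prints 110 while the sleep runs
kill "$pid"; wait "$pid" 2>/dev/null || true
monitor no_such_daemon_xyz    # prints 100
```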
 

case

I need a solution

hi,

please be kind and give me your comments.

We have a 2-node VCS cluster here.
- case a) VCS is running on only one node, node a. Service group a runs on node a and service group b runs on node b. Are there any constraints on starting VCS on node b without getting split brain?
- case b) VCS runs on node a. On node a run both the application that normally runs on node a and the application that normally runs on node b. Are there any constraints on starting VCS on node b without getting split brain?
- case c) VCS runs on node a, with an application running there. On node b VCS is not started. In this case I could not fail the application over from node a to node b.
tnx a lot,
marius
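As a general rule (verify against your version's documentation): split brain is a membership problem, not a service-group placement problem. As long as LLT and GAB see both nodes in a single membership, starting had on node b simply joins the running cluster and builds its configuration from node a, regardless of where the groups currently run; the placement in cases a) and b) only affects which groups VCS will then try to bring online where. A quick pre-check sketch:

```
# On node b, before starting VCS:
lltstat -nvv | head     # both nodes' links should show UP
gabconfig -a            # port a membership should list both nodes

hastart
```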

V-16-2-13027 Oracle RAC

I need a solution

Dear all,

Engine_A has started to be flooded with the V-16-2-13027 error code; eventually some resources fault after multiple failed monitor cycles, and clean is called.

- PrivNIC is configured with 2 interfaces, which are the same two interfaces used for cluster interconnect communication.

- lltstat shows these errors:

 

  60         Snd not connected
   0          Snd no buffer
   0          Snd stream flow drops
  1636916    Snd no links up
 
- ps -ef doesn't show a hung monitoring process or anything similar.
- Server performance is fine in general; no recent changes were made.
 
Any ideas ?
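V-16-2-13027 means a monitor entry point did not finish within its MonitorTimeout. When it floods across many unrelated resources at once, the usual suspects are system-wide (CPU, memory, or I/O starvation, or hung name-service/NFS lookups inside monitor scripts) rather than the applications themselves; the large "Snd no links up" counter in lltstat also points at private-link trouble, which fits PrivNIC sharing the cluster interconnect. A hedged first-pass sketch (the type name and timeout value are examples; substitute the affected resource type):

```
# Current timeout for the affected type:
hatype -value Application MonitorTimeout

# Agent-side detail for the failing monitors:
tail -100 /var/VRTSvcs/log/Application_A.log

# Raise the timeout while investigating:
haconf -makerw
hatype -modify Application MonitorTimeout 120
haconf -dump -makero
```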

VCS, fencing and licensing

I need a solution

Hi,
Because of the price of SFHA, we sold only VCS licenses to our customer.
The question is whether disk I/O fencing can be used, as it is Volume Manager dependent and might be vxdmp dependent.

Thank you.


Cluster upgrade from version 5 to 6

I need a solution

Hello,

 

we would like to upgrade Veritas cluster 5 to 6. Some of the service groups run in parallel. What is the best way to upgrade?

Also, can we run Veritas cluster 5.1 on one node and 6 on the other in a 2-node cluster?

CFS access is blocking after both heartbeat links are down

I need a solution

Hi

I have a server cluster environment with VCS 6.0.2. Six servers constitute the cluster, and I/O fencing is configured with 3 coordinator disks. If I cold-boot one of the servers, I find that CFS access on the other running servers blocks for a period.

I found that CFS access starts being blocked when the logs below appear in /var/log/messages:

  LLT INFO V-14-1-10205 link 0 (eth6.109) node 0 in trouble

  LLT INFO V-14-1-10205 link 1 (eth7.110) node 0 in trouble

And access is allowed again when the logs below appear in /var/log/messages:

vxfs: msgcnt 8 Phase 2 - /dev/vx/dsk/filedg/filevol - Buffer reads allowed.

vxfs: msgcnt 9 Phase 9 - /dev/vx/dsk/filedg/filevol - Set Primary nodeid to 2

vxglm INFO V-42-106 GLM recovery complete, gen f59d30, mbr 2c/0/0/0

vxglm INFO V-42-107 times: skew 2673 ms, remaster 78 ms, completion 40 ms

I think the CFS access blocking is for data protection, but from my observation CFS access blocking may continue for 10+ seconds on the running servers, so my questions are:

1. Is it correct behaviour for VCS to block CFS access for 10+ seconds?

2. Why does CFS access blocking not start after the heartbeat links have expired, and before racing for the coordinator disks?

 

Thanks in advance!

vcs upgrade

I need a solution

Hi,

Please, I would like to find out how I should upgrade VCS on a 2-node cluster. I run it on Solaris 10.

The steps would be:
- hastop -local -evacuate    ;stop VCS on node a and evacuate its groups to node b, which keeps running VCS
- init 1                     ;bring node a to single-user mode
- apply the patch
- reboot node a
- hastart                    ;start VCS on node a
- hastop -local -evacuate    ;stop VCS on node b and evacuate its groups to node a
- init 1                     ;node b in single-user mode
- apply the patch
- reboot node b
- hagrp -switch <group> -to <node>   ;switch groups back

tnx so much,
marius

SCSI-3 PR for IO fencing not supported on VMware ESXi Virtual machines?

I need a solution

We have 6 guest OSes (Red Hat 6.3) on 3 physical machines running VMware ESX 5.0; each physical machine has 2 VMs.

SFCFSHA has been installed.

The fencing disks are three 1 GB LUNs from physical arrays, presented via VMware RDM in physical compatibility mode.

But the vxfentsthdw test shows:

Testing quanziweb01 /dev/vx/rdmp/ibm_ds8x000_0088 quanziweb02 /dev/vx/rdmp/ibm_ds8x000_0088
 
Evaluate the disk before testing  ........................ No Pre-existing keys
RegisterIgnoreKeys on disk /dev/vx/rdmp/ibm_ds8x000_0088 from node quanziweb01  Passed
Verify registrations for disk /dev/vx/rdmp/ibm_ds8x000_0088 on node quanziweb01  Passed
RegisterIgnoreKeys on disk /dev/vx/rdmp/ibm_ds8x000_0088 from node quanziweb02  Passed
Verify registrations for disk /dev/vx/rdmp/ibm_ds8x000_0088 on node quanziweb02  Passed
Unregister keys on disk /dev/vx/rdmp/ibm_ds8x000_0088 from node quanziweb01  Passed
Verify registrations for disk /dev/vx/rdmp/ibm_ds8x000_0088 on node quanziweb02  Failed
 
Unregistration test for disk  failed on node quanziweb02.
Unregistration from one node is causing unregistration of keys from the other node.
Disk  is not SCSI-3 compliant on node quanziweb02.
Execute the utility vxfentsthdw again and if failure persists contact
the vendor for support in enabling SCSI-3 persistent reservations
================================================================================

I'm sure that the three 1 GB LUNs from the disk array support SCSI-3 persistent reservations IF used on physical machines.

But why do they not support SCSI-3 PR when used in VMware ESX virtual machines?

Is there any up-to-date official guide for configuring I/O fencing for VCS/CFS on VMware ESX?

Thanks!

Need to know the root cause for the service outage in vcs cluster

I need a solution

Hello Friends,

I need your help in finding out what caused the service outage and how we can overcome this situation.

Logs from the messages file: /var/adm/messages

==========================================

 

Sep  2 23:35:04 duadm2 genunix: [ID 140958 kern.notice] LLT INFO V-14-1-10205 link 0 (oce1) node 0 in trouble
Sep  2 23:35:05 duadm2 genunix: [ID 140958 kern.notice] LLT INFO V-14-1-10205 link 2 (oce0) node 0 in trouble
Sep  2 23:35:09 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 2 (oce0) node 0 inactive 8 sec (2883648)
Sep  2 23:35:10 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 0 (oce1) node 0 inactive 8 sec (6644221)
Sep  2 23:35:10 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 2 (oce0) node 0 inactive 9 sec (2883648)
Sep  2 23:35:11 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 0 (oce1) node 0 inactive 9 sec (6644221)
Sep  2 23:35:11 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 2 (oce0) node 0 inactive 10 sec (2883648)
Sep  2 23:35:12 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 0 (oce1) node 0 inactive 10 sec (6644221)
Sep  2 23:35:12 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 2 (oce0) node 0 inactive 11 sec (2883648)
Sep  2 23:35:13 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 0 (oce1) node 0 inactive 11 sec (6644221)
Sep  2 23:35:13 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 2 (oce0) node 0 inactive 12 sec (2883648)
Sep  2 23:35:14 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 0 (oce1) node 0 inactive 12 sec (6644221)
Sep  2 23:35:14 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 2 (oce0) node 0 inactive 13 sec (2883648)
Sep  2 23:35:15 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 0 (oce1) node 0 inactive 13 sec (6644221)
Sep  2 23:35:15 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 2 (oce0) node 0 inactive 14 sec (2883648)
Sep  2 23:35:15 duadm2 genunix: [ID 592107 kern.notice] LLT INFO V-14-1-10510 sent hbreq (NULL) on link 2 (oce0) node 0. 4 more to go.
Sep  2 23:35:16 duadm2 genunix: [ID 592107 kern.notice] LLT INFO V-14-1-10510 sent hbreq (NULL) on link 2 (oce0) node 0. 3 more to go.
Sep  2 23:35:16 duadm2 genunix: [ID 592107 kern.notice] LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (oce1) node 0. 4 more to go.
Sep  2 23:35:16 duadm2 genunix: [ID 592107 kern.notice] LLT INFO V-14-1-10510 sent hbreq (NULL) on link 2 (oce0) node 0. 2 more to go.
Sep  2 23:35:16 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 0 (oce1) node 0 inactive 14 sec (6644221)
Sep  2 23:35:16 duadm2 genunix: [ID 592107 kern.notice] LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (oce1) node 0. 3 more to go.
Sep  2 23:35:16 duadm2 genunix: [ID 592107 kern.notice] LLT INFO V-14-1-10510 sent hbreq (NULL) on link 2 (oce0) node 0. 1 more to go.
Sep  2 23:35:16 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 2 (oce0) node 0 inactive 15 sec (2883648)
Sep  2 23:35:16 duadm2 genunix: [ID 592107 kern.notice] LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (oce1) node 0. 2 more to go.
Sep  2 23:35:16 duadm2 genunix: [ID 592107 kern.notice] LLT INFO V-14-1-10510 sent hbreq (NULL) on link 2 (oce0) node 0. 0 more to go.
Sep  2 23:35:16 duadm2 genunix: [ID 592107 kern.notice] LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (oce1) node 0. 1 more to go.
Sep  2 23:35:16 duadm2 genunix: [ID 205468 kern.notice] LLT INFO V-14-1-10509 link 2 (oce0) node 0 expired
Sep  2 23:35:17 duadm2 genunix: [ID 592107 kern.notice] LLT INFO V-14-1-10510 sent hbreq (NULL) on link 0 (oce1) node 0. 0 more to go.
Sep  2 23:35:17 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 0 (oce1) node 0 inactive 15 sec (6644221)
Sep  2 23:35:17 duadm2 genunix: [ID 205468 kern.notice] LLT INFO V-14-1-10509 link 0 (oce1) node 0 expired
Sep  2 23:35:21 duadm2 genunix: [ID 316943 kern.notice] GAB INFO V-15-1-20036 Port h gen   288706 membership 01
Sep  2 23:35:21 duadm2 genunix: [ID 608499 kern.notice] GAB INFO V-15-1-20037 Port h gen   288706   jeopardy ;1
Sep  2 23:35:21 duadm2 genunix: [ID 316943 kern.notice] GAB INFO V-15-1-20036 Port a gen   288701 membership 01
Sep  2 23:35:21 duadm2 genunix: [ID 608499 kern.notice] GAB INFO V-15-1-20037 Port a gen   288701   jeopardy ;1
Sep  2 23:35:21 duadm2 genunix: [ID 316943 kern.notice] GAB INFO V-15-1-20036 Port b gen   288704 membership 01
Sep  2 23:35:21 duadm2 genunix: [ID 608499 kern.notice] GAB INFO V-15-1-20037 Port b gen   288704   jeopardy ;1
Sep  2 23:35:21 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS INFO V-16-1-10077 Received new cluster membership
Sep  2 23:35:21 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10111 System duadm2 (Node '1') is in Regular and Jeopardy Memberships - Membership: 0x3, Jeopardy: 0x2
Sep  2 23:56:50 duadm2 in.mpathd[6542]: [ID 585766 daemon.error] Cannot meet requested failure detection time of 10000 ms on (inet oce2) new failure detection time for group "stor_mnic" is 48426 ms
Sep  2 23:56:57 duadm2 in.mpathd[6542]: [ID 594170 daemon.error] NIC failure detected on oce0 of group pub_mnic
Sep  2 23:56:57 duadm2 in.mpathd[6542]: [ID 832587 daemon.error] Successfully failed over from NIC oce0 to NIC oce9
Sep  2 23:57:01 duadm2 in.mpathd[6542]: [ID 299542 daemon.error] NIC repair detected on oce0 of group pub_mnic
Sep  2 23:57:01 duadm2 in.mpathd[6542]: [ID 620804 daemon.error] Successfully failed back to NIC oce0
Sep  2 23:57:10 duadm2 in.mpathd[6542]: [ID 594170 daemon.error] NIC failure detected on oce0 of group pub_mnic
Sep  2 23:57:10 duadm2 in.mpathd[6542]: [ID 832587 daemon.error] Successfully failed over from NIC oce0 to NIC oce9
Sep  2 23:57:10 duadm2 ip: [ID 876157 kern.warning] WARNING: node 44:1e:a1:74:e5:46 is using our IP address 010.001.014.027 on oce9
Sep  2 23:57:10 duadm2 ip: [ID 876157 kern.warning] WARNING: node 44:1e:a1:74:e5:46 is using our IP address 010.001.014.030 on oce9
Sep  2 23:57:10 duadm2 ip: [ID 876157 kern.warning] WARNING: node 44:1e:a1:74:e5:46 is using our IP address 010.001.014.027 on oce9
Sep  2 23:57:10 duadm2 ip: [ID 876157 kern.warning] WARNING: node 44:1e:a1:74:e5:46 is using our IP address 010.001.014.030 on oce9
Sep  2 23:57:10 duadm2 ip: [ID 876157 kern.warning] WARNING: node 44:1e:a1:74:e5:46 is using our IP address 010.001.014.027 on oce9
Sep  2 23:57:10 duadm2 ip: [ID 876157 kern.warning] WARNING: node 44:1e:a1:74:e5:46 is using our IP address 010.001.014.030 on oce9
Sep  2 23:57:10 duadm2 ip: [ID 567813 kern.warning] WARNING: oce9:2 has duplicate address 010.001.014.027 (claimed by 44:1e:a1:74:e5:46); disabled
Sep  2 23:57:10 duadm2 ip: [ID 567813 kern.warning] WARNING: oce9:1 has duplicate address 010.001.014.030 (claimed by 44:1e:a1:74:e5:46); disabled
Sep  2 23:57:11 duadm2 in.mpathd[6542]: [ID 299542 daemon.error] NIC repair detected on oce0 of group pub_mnic
Sep  2 23:57:11 duadm2 in.mpathd[6542]: [ID 620804 daemon.error] Successfully failed back to NIC oce0
Sep  2 23:57:20 duadm2 in.mpathd[6542]: [ID 168056 daemon.error] All Interfaces in group pub_mnic have failed
Sep  2 23:57:27 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10303 Resource pub_mnic (Owner: Unspecified, Group: PubLan) is FAULTED (timed out) on sys duadm2
Sep  2 23:57:28 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10205 Group PubLan is faulted on system duadm2
Sep  2 23:57:29 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10303 Resource pub_mnic (Owner: Unspecified, Group: PubLan) is FAULTED (timed out) on sys duadm1
Sep  2 23:57:30 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10303 Resource stor_mnic (Owner: Unspecified, Group: StorLan) is FAULTED (timed out) on sys duadm1
Sep  2 23:57:31 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10205 Group PubLan is faulted on system duadm1
Sep  2 23:57:53 duadm2 in.mpathd[6542]: [ID 594170 daemon.error] NIC failure detected on oce11 of group stor_mnic
Sep  2 23:57:53 duadm2 in.mpathd[6542]: [ID 832587 daemon.error] Successfully failed over from NIC oce11 to NIC oce2
Sep  2 23:57:57 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10303 Resource ossfs_p1 (Owner: Unspecified, Group: Ossfs) is FAULTED (timed out) on sys duadm1
Sep  2 23:57:57 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10303 Resource syb1_p1 (Owner: Unspecified, Group: Sybase1) is FAULTED (timed out) on sys duadm1
Sep  2 23:57:58 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13067 (duadm1) Agent is calling clean for resource(ossfs_ip) because the resource became OFFLINE unexpectedly, on its own.
Sep  2 23:57:59 duadm2 in.mpathd[6542]: [ID 168056 daemon.error] All Interfaces in group stor_mnic have failed
Sep  2 23:58:02 duadm2 AgentFramework[6332]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13067 Thread(8) Agent is calling clean for resource(syb1_ip) because the resource became OFFLINE unexpectedly, on its own.
Sep  2 23:58:02 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13067 (duadm2) Agent is calling clean for resource(syb1_ip) because the resource became OFFLINE unexpectedly, on its own.
Sep  2 23:58:02 duadm2 AgentFramework[6332]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13068 Thread(8) Resource(syb1_ip) - clean completed successfully.
Sep  2 23:58:05 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10303 Resource syb1_p1 (Owner: Unspecified, Group: Sybase1) is FAULTED (timed out) on sys duadm2
Sep  2 23:58:05 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10303 Resource ossfs_p1 (Owner: Unspecified, Group: Ossfs) is FAULTED (timed out) on sys duadm2
Sep  2 23:58:08 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10303 Resource stor_mnic (Owner: Unspecified, Group: StorLan) is FAULTED (timed out) on sys duadm2
Sep  2 23:58:20 duadm2 ldap_cachemgr[516]: [ID 293258 daemon.warning] libsldap: Status: 91  Mesg: openConnection: simple bind failed - Can't connect to the LDAP server
Sep  2 23:58:21 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13067 (duadm1) Agent is calling clean for resource(stor_p) because the resource became OFFLINE unexpectedly, on its own.
Sep  2 23:58:21 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13070 (duadm1) Resource(stor_p) - clean not implemented.
Sep  2 23:58:22 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10205 Group StorLan is faulted on system duadm1
Sep  2 23:58:24 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13027 (duadm1) Resource(cluster_maint) - monitor procedure did not complete within the expected time.
Sep  2 23:58:30 duadm2 AgentFramework[6345]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13067 Thread(3) Agent is calling clean for resource(stor_p) because the resource became OFFLINE unexpectedly, on its own.
Sep  2 23:58:30 duadm2 AgentFramework[6345]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13070 Thread(3) Resource(stor_p) - clean not implemented.
Sep  2 23:58:30 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13067 (duadm2) Agent is calling clean for resource(stor_p) because the resource became OFFLINE unexpectedly, on its own.
Sep  2 23:58:30 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-2-13070 (duadm2) Resource(stor_p) - clean not implemented.
Sep  2 23:58:31 duadm2 Had[6267]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10205 Group StorLan is faulted on system duadm2
Sep  2 23:59:19 duadm2 ldap_cachemgr[516]: [ID 293258 daemon.warning] libsldap: Status: 91  Mesg: openConnection: simple bind failed - Can't connect to the LDAP server
Sep  2 23:59:19 duadm2 ldap_cachemgr[516]: [ID 545954 daemon.error] libsldap: makeConnection: failed to open connection to 10.1.14.35
Sep  2 23:59:19 duadm2 ldap_cachemgr[516]: [ID 687686 daemon.warning] libsldap: Falling back to anonymous, non-SSL mode for __ns_ldap_getRootDSE. openConnection: simple bind failed - Can't connect to the LDAP server
Sep  2 23:59:20 duadm2 ldap_cachemgr[516]: [ID 293258 daemon.warning] libsldap: Status: 91  Mesg: openConnection: simple bind failed - Can't connect to the LDAP server
Sep  2 23:59:20 duadm2 ldap_cachemgr[516]: [ID 292100 daemon.warning] libsldap: could not remove 10.1.14.35 from servers list
Sep  2 23:59:20 duadm2 ldap_cachemgr[516]: [ID 292100 daemon.warning] libsldap: could not remove 10.1.14.37 from servers list
Sep  2 23:59:20 duadm2 ldap_cachemgr[516]: [ID 293258 daemon.warning] libsldap: Status: 7  Mesg: Session error no available conn.
Sep  2 23:59:20 duadm2 ldap_cachemgr[516]: [ID 186574 daemon.error] Error: Unable to refresh profile:default: Session error no available conn.
Sep  3 00:00:08 duadm2 svc.startd[9]: [ID 122153 daemon.warning] svc:/ericsson/eric_monitor/ddc:default: Method or service exit timed out.  Killing contract 145156.
Sep  3 00:00:08 duadm2 svc.startd[9]: [ID 636263 daemon.warning] svc:/ericsson/eric_monitor/ddc:default: Method "/opt/ERICddc/bin/ddc stop" failed due to signal KILL.
Sep  3 00:00:19 duadm2 ldap_cachemgr[516]: [ID 293258 daemon.warning] libsldap: Status: 91  Mesg: openConnection: simple bind failed - Can't connect to the LDAP server
Sep  3 00:00:19 duadm2 ldap_cachemgr[516]: [ID 545954 daemon.error] libsldap: makeConnection: failed to open connection to 10.1.14.35
Sep  3 00:00:19 duadm2 ldap_cachemgr[516]: [ID 293258 daemon.warning] libsldap: Status: 91  Mesg: openConnection: simple bind failed - Can't connect to the LDAP server
Sep  3 00:00:19 duadm2 ldap_cachemgr[516]: [ID 545954 daemon.error] libsldap: makeConnection: failed to open connection to 10.1.14.37
Sep  3 00:00:19 duadm2 ldap_cachemgr[516]: [ID 687686 daemon.warning] libsldap: Falling back to anonymous, non-SSL mode for __ns_ldap_getRootDSE. openConnection: simple bind failed - Can't connect to the LDAP server
Sep  3 00:00:20 duadm2 ldap_cachemgr[516]: [ID 293258 daemon.warning] libsldap: Status: 91  Mesg: openConnection: simple bind failed - Can't connect to the LDAP server
Sep  3 00:00:20 duadm2 last message repeated 1 time
Sep  3 00:00:20 duadm2 ldap_cachemgr[516]: [ID 545954 daemon.error] libsldap: makeConnection: failed to open connection to 10.1.14.35
Sep  3 00:00:20 duadm2 ldap_cachemgr[516]: [ID 545954 daemon.error] libsldap: makeConnection: failed to open connection to 10.1.14.37
Sep  3 00:00:20 duadm2 ldap_cachemgr[516]: [ID 687686 daemon.warning] libsldap: Falling back to anonymous, non-SSL mode for __ns_ldap_getRootDSE. openConnection: simple bind failed - Can't connect to the LDAP server
Sep  3 00:00:20 duadm2 last message repeated 1 time
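
To confirm from the messages log that both monitored LLT links lost heartbeats together before GAB declared jeopardy, the V-14-1-10032 (inactive) and V-14-1-10509 (expired) lines can be summarized with a small script. This is my own illustration for reading the log, not a Symantec tool; the embedded sample lines are copied from the excerpt above:

```python
import re

# Illustrative excerpt from /var/adm/messages (lines copied from the log above).
MESSAGES = """\
Sep  2 23:35:13 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 0 (oce1) node 0 inactive 11 sec (6644221)
Sep  2 23:35:13 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 2 (oce0) node 0 inactive 12 sec (2883648)
Sep  2 23:35:16 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 2 (oce0) node 0 inactive 15 sec (2883648)
Sep  2 23:35:16 duadm2 genunix: [ID 205468 kern.notice] LLT INFO V-14-1-10509 link 2 (oce0) node 0 expired
Sep  2 23:35:17 duadm2 genunix: [ID 487101 kern.notice] LLT INFO V-14-1-10032 link 0 (oce1) node 0 inactive 15 sec (6644221)
Sep  2 23:35:17 duadm2 genunix: [ID 205468 kern.notice] LLT INFO V-14-1-10509 link 0 (oce1) node 0 expired
"""

INACTIVE = re.compile(r"link \d+ \((\w+)\) node \d+ inactive (\d+) sec")
EXPIRED = re.compile(r"link \d+ \((\w+)\) node \d+ expired")

def summarize(text):
    """Return per-link peak heartbeat inactivity (sec) and which links expired."""
    peak, expired = {}, set()
    for line in text.splitlines():
        m = INACTIVE.search(line)
        if m:
            link, secs = m.group(1), int(m.group(2))
            peak[link] = max(peak.get(link, 0), secs)
        else:
            m = EXPIRED.search(line)
            if m:
                expired.add(m.group(1))
    return peak, expired

peak, expired = summarize(MESSAGES)
print(peak, expired)
```

Both oce0 and oce1 reach 15 seconds of inactivity and expire within one second of each other, while the engine log's V-16-1-11141 line shows oce8 still UP. One surviving link is exactly the condition under which GAB reports jeopardy membership instead of splitting the cluster.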

============================================================

Engine logs:

2013/09/02 23:35:21 VCS INFO V-16-1-10077 Received new cluster membership
2013/09/02 23:35:21 VCS NOTICE V-16-1-10112 System (duadm1) - Membership: 0x3, DDNA: 0x2
2013/09/02 23:35:21 VCS ERROR V-16-1-10111 System duadm2 (Node '1') is in Regular and Jeopardy Memberships - Membership: 0x3, Jeopardy: 0x2
2013/09/02 23:35:25 VCS WARNING V-16-1-11141 LLT heartbeat link status changed. Previous status = oce1 UP oce8 UP oce0 UP; Current status = oce1 DOWN oce8 UP oce0 DOWN.
2013/09/02 23:57:27 VCS ERROR V-16-1-10303 Resource pub_mnic (Owner: Unspecified, Group: PubLan) is FAULTED (timed out) on sys duadm2
2013/09/02 23:57:27 VCS NOTICE V-16-1-10300 Initiating Offline of Resource pub_p (Owner: Unspecified, Group: PubLan) on System duadm2
2013/09/02 23:57:27 VCS INFO V-16-6-0 (duadm2) resfault:(resfault) Invoked with arg0=duadm2, arg1=pub_mnic, arg2=ONLINE
2013/09/02 23:57:27 VCS INFO V-16-0 (duadm2) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm2 ,arg2=pub_mnic
2013/09/02 23:57:27 VCS INFO V-16-6-15002 (duadm2) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm2 pub_mnic ONLINE  successfully
2013/09/02 23:57:28 VCS INFO V-16-1-10305 Resource pub_p (Owner: Unspecified, Group: PubLan) is offline on duadm2 (VCS initiated)
2013/09/02 23:57:28 VCS ERROR V-16-1-10205 Group PubLan is faulted on system duadm2
2013/09/02 23:57:28 VCS NOTICE V-16-1-10446 Group PubLan is offline on system duadm2
2013/09/02 23:57:28 VCS INFO V-16-1-10493 Evaluating duadm1 as potential target node for group PubLan
2013/09/02 23:57:28 VCS INFO V-16-1-50010 Group PubLan is online or faulted on system duadm1
2013/09/02 23:57:28 VCS INFO V-16-1-10493 Evaluating duadm2 as potential target node for group PubLan
2013/09/02 23:57:28 VCS INFO V-16-1-50010 Group PubLan is online or faulted on system duadm2
2013/09/02 23:57:28 VCS NOTICE V-16-1-10235 Restart is set for group PubLan. Group will be brought online if fault on persistent resource clears. If group is brought online anywhere else from AutoStartList or manually, then Restart will be reset
2013/09/02 23:57:28 VCS INFO V-16-6-0 (duadm2) postoffline:(postoffline) Invoked with arg0=duadm2, arg1=PubLan
2013/09/02 23:57:28 VCS INFO V-16-2-13075 (duadm1) Resource(ossfs_ip) has reported unexpected OFFLINE 1 times, which is still within the ToleranceLimit(1).
2013/09/02 23:57:28 VCS INFO V-16-6-0 (duadm2) postoffline:Executing /ericsson/core/cluster/scripts/postoffline.sh with arg0=duadm2, arg1=PubLan
2013/09/02 23:57:28 VCS INFO V-16-6-0 (duadm2) postoffline.sh:PubLan:Nothing done
2013/09/02 23:57:28 VCS INFO V-16-6-0 (duadm2) postoffline:Completed execution of /ericsson/core/cluster/scripts/postoffline.sh for group PubLan
2013/09/02 23:57:29 VCS INFO V-16-6-15002 (duadm2) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/postoffline duadm2 PubLan   successfully
2013/09/02 23:57:29 VCS ERROR V-16-1-10303 Resource pub_mnic (Owner: Unspecified, Group: PubLan) is FAULTED (timed out) on sys duadm1
2013/09/02 23:57:29 VCS NOTICE V-16-1-10300 Initiating Offline of Resource pub_p (Owner: Unspecified, Group: PubLan) on System duadm1
2013/09/02 23:57:30 VCS INFO V-16-6-0 (duadm1) resfault:(resfault) Invoked with arg0=duadm1, arg1=pub_mnic, arg2=ONLINE
2013/09/02 23:57:30 VCS INFO V-16-0 (duadm1) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm1 ,arg2=pub_mnic
2013/09/02 23:57:30 VCS INFO V-16-6-15002 (duadm1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm1 pub_mnic ONLINE  successfully
2013/09/02 23:57:31 VCS ERROR V-16-1-10303 Resource stor_mnic (Owner: Unspecified, Group: StorLan) is FAULTED (timed out) on sys duadm1
2013/09/02 23:57:31 VCS WARNING V-16-1-13310 Offlining parent group DDCMon on system duadm1
2013/09/02 23:57:31 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ddc_app (Owner: Unspecified, Group: DDCMon) on System duadm1
2013/09/02 23:57:31 VCS NOTICE V-16-1-10300 Initiating Offline of Resource hyperic_app (Owner: Unspecified, Group: DDCMon) on System duadm1
2013/09/02 23:57:31 VCS INFO V-16-1-10305 Resource pub_p (Owner: Unspecified, Group: PubLan) is offline on duadm1 (VCS initiated)
2013/09/02 23:57:31 VCS ERROR V-16-1-10205 Group PubLan is faulted on system duadm1
2013/09/02 23:57:31 VCS NOTICE V-16-1-10446 Group PubLan is offline on system duadm1
2013/09/02 23:57:31 VCS INFO V-16-1-10493 Evaluating duadm1 as potential target node for group PubLan
2013/09/02 23:57:31 VCS INFO V-16-1-50010 Group PubLan is online or faulted on system duadm1
2013/09/02 23:57:31 VCS INFO V-16-1-10493 Evaluating duadm2 as potential target node for group PubLan
2013/09/02 23:57:31 VCS INFO V-16-1-50010 Group PubLan is online or faulted on system duadm2
2013/09/02 23:57:31 VCS NOTICE V-16-1-10235 Restart is set for group PubLan. Group will be brought online if fault on persistent resource clears. If group is brought online anywhere else from AutoStartList or manually, then Restart will be reset
2013/09/02 23:57:31 VCS INFO V-16-6-0 (duadm1) resfault:(resfault) Invoked with arg0=duadm1, arg1=stor_mnic, arg2=ONLINE
2013/09/02 23:57:31 VCS INFO V-16-6-0 (duadm1) postoffline:(postoffline) Invoked with arg0=duadm1, arg1=PubLan
2013/09/02 23:57:31 VCS INFO V-16-0 (duadm1) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm1 ,arg2=stor_mnic
2013/09/02 23:57:31 VCS INFO V-16-6-0 (duadm1) postoffline:Executing /ericsson/core/cluster/scripts/postoffline.sh with arg0=duadm1, arg1=PubLan
2013/09/02 23:57:31 VCS INFO V-16-6-15002 (duadm1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm1 stor_mnic ONLINE  successfully
2013/09/02 23:57:31 VCS INFO V-16-6-0 (duadm1) postoffline.sh:PubLan:Nothing done
2013/09/02 23:57:31 VCS INFO V-16-6-0 (duadm1) postoffline:Completed execution of /ericsson/core/cluster/scripts/postoffline.sh for group PubLan
2013/09/02 23:57:31 VCS INFO V-16-6-15002 (duadm1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/postoffline duadm1 PubLan   successfully
2013/09/02 23:57:32 VCS INFO V-16-2-13075 (duadm2) Resource(syb1_ip) has reported unexpected OFFLINE 1 times, which is still within the ToleranceLimit(1).
2013/09/02 23:57:41 VCS INFO V-16-2-13075 (duadm1) Resource(pms_ip) has reported unexpected OFFLINE 1 times, which is still within the ToleranceLimit(1).
2013/09/02 23:57:41 VCS INFO V-16-2-13075 (duadm1) Resource(snmp_ip) has reported unexpected OFFLINE 1 times, which is still within the ToleranceLimit(1).
2013/09/02 23:57:41 VCS INFO V-16-2-13075 (duadm1) Resource(cms_ip) has reported unexpected OFFLINE 1 times, which is still within the ToleranceLimit(1).
2013/09/02 23:57:57 VCS ERROR V-16-1-10303 Resource ossfs_p1 (Owner: Unspecified, Group: Ossfs) is FAULTED (timed out) on sys duadm1
2013/09/02 23:57:57 VCS NOTICE V-16-1-10300 Initiating Offline of Resource snmp_ip (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/02 23:57:57 VCS NOTICE V-16-1-10300 Initiating Offline of Resource pms_ip (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/02 23:57:57 VCS NOTICE V-16-1-10300 Initiating Offline of Resource stop_oss (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/02 23:57:57 VCS NOTICE V-16-1-10300 Initiating Offline of Resource cms_ip (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/02 23:57:57 VCS ERROR V-16-1-10303 Resource syb1_p1 (Owner: Unspecified, Group: Sybase1) is FAULTED (timed out) on sys duadm1
2013/09/02 23:57:58 VCS INFO V-16-1-10305 Resource snmp_ip (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/02 23:57:58 VCS INFO V-16-1-10305 Resource pms_ip (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/02 23:57:58 VCS INFO V-16-1-10305 Resource cms_ip (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/02 23:57:58 VCS INFO V-16-6-0 (duadm1) resfault:(resfault) Invoked with arg0=duadm1, arg1=syb1_p1, arg2=ONLINE
2013/09/02 23:57:58 VCS INFO V-16-6-0 (duadm1) resfault:(resfault) Invoked with arg0=duadm1, arg1=ossfs_p1, arg2=ONLINE
2013/09/02 23:57:58 VCS INFO V-16-0 (duadm1) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm1 ,arg2=syb1_p1
2013/09/02 23:57:58 VCS INFO V-16-0 (duadm1) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm1 ,arg2=ossfs_p1
2013/09/02 23:57:58 VCS INFO V-16-6-15002 (duadm1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm1 syb1_p1 ONLINE  successfully
2013/09/02 23:57:58 VCS INFO V-16-6-15002 (duadm1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm1 ossfs_p1 ONLINE  successfully
2013/09/02 23:57:58 VCS ERROR V-16-2-13067 (duadm1) Agent is calling clean for resource(ossfs_ip) because the resource became OFFLINE unexpectedly, on its own.
2013/09/02 23:57:58 VCS INFO V-16-2-13068 (duadm1) Resource(ossfs_ip) - clean completed successfully.
2013/09/02 23:57:59 VCS INFO V-16-1-10307 Resource ossfs_ip (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (Not initiated by VCS)
2013/09/02 23:57:59 VCS INFO V-16-6-0 (duadm1) resfault:(resfault) Invoked with arg0=duadm1, arg1=ossfs_ip, arg2=ONLINE
2013/09/02 23:57:59 VCS INFO V-16-0 (duadm1) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm1 ,arg2=ossfs_ip
2013/09/02 23:57:59 VCS INFO V-16-6-15002 (duadm1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm1 ossfs_ip ONLINE  successfully
2013/09/02 23:58:02 VCS ERROR V-16-2-13067 (duadm2) Agent is calling clean for resource(syb1_ip) because the resource became OFFLINE unexpectedly, on its own.
2013/09/02 23:58:02 VCS INFO V-16-2-13068 (duadm2) Resource(syb1_ip) - clean completed successfully.
2013/09/02 23:58:02 VCS INFO V-16-1-10307 Resource syb1_ip (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (Not initiated by VCS)
2013/09/02 23:58:02 VCS NOTICE V-16-1-10300 Initiating Offline of Resource stop_sybase (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/02 23:58:02 VCS INFO V-16-6-0 (duadm2) resfault:(resfault) Invoked with arg0=duadm2, arg1=syb1_ip, arg2=ONLINE
2013/09/02 23:58:02 VCS INFO V-16-10001-88 (duadm2) Application:stop_sybase:offline:Executed [/ericsson/core/cluster/scripts/stop_sybase.sh stop] successfully.
2013/09/02 23:58:02 VCS INFO V-16-0 (duadm2) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm2 ,arg2=syb1_ip
2013/09/02 23:58:02 VCS INFO V-16-6-15002 (duadm2) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm2 syb1_ip ONLINE  successfully
2013/09/02 23:58:03 VCS INFO V-16-1-50135 User root fired command: hagrp -switch Oss  duadm2  from localhost
2013/09/02 23:58:03 VCS NOTICE V-16-1-10208 Initiating switch of group Oss from system duadm1 to system duadm2
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource activemq (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource activemq_oss_loggingbroker (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource alex (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource apache (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource cron (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource fmria (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource glassfish (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource imgr_tomcat (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource restart_mc (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource syb_log_mon (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource syb_proc_mon (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource trapdist (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS NOTICE V-16-1-10300 Initiating Offline of Resource vrsnt_log_mon (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:58:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/02 23:58:03 VCS INFO V-16-10001-88 (duadm1) Application:cron:offline:Executed [/ericsson/core/cluster/scripts/cron.sh stop] successfully.
2013/09/02 23:58:03 VCS INFO V-16-10001-88 (duadm1) Application:alex:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/eric_3pp/alex:default] successfully.
2013/09/02 23:58:03 VCS INFO V-16-10001-88 (duadm1) Application:apache:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /network/http:apache2] successfully.
2013/09/02 23:58:05 VCS INFO V-16-10001-88 (duadm1) Application:restart_mc:offline:Executed [/ericsson/core/cluster/scripts/restart_mc.sh stop] successfully.
2013/09/02 23:58:05 VCS ERROR V-16-1-10303 Resource syb1_p1 (Owner: Unspecified, Group: Sybase1) is FAULTED (timed out) on sys duadm2
2013/09/02 23:58:05 VCS ERROR V-16-1-10303 Resource ossfs_p1 (Owner: Unspecified, Group: Ossfs) is FAULTED (timed out) on sys duadm2
2013/09/02 23:58:05 VCS INFO V-16-6-0 (duadm2) resfault:(resfault) Invoked with arg0=duadm2, arg1=syb1_p1, arg2=ONLINE
2013/09/02 23:58:05 VCS INFO V-16-6-0 (duadm2) resfault:(resfault) Invoked with arg0=duadm2, arg1=ossfs_p1, arg2=ONLINE
2013/09/02 23:58:05 VCS INFO V-16-0 (duadm2) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm2 ,arg2=syb1_p1
2013/09/02 23:58:05 VCS INFO V-16-0 (duadm2) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm2 ,arg2=ossfs_p1
2013/09/02 23:58:05 VCS INFO V-16-6-15002 (duadm2) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm2 syb1_p1 ONLINE  successfully
2013/09/02 23:58:05 VCS INFO V-16-6-15002 (duadm2) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm2 ossfs_p1 ONLINE  successfully
2013/09/02 23:58:05 VCS INFO V-16-1-10305 Resource stop_sybase (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/02 23:58:05 VCS NOTICE V-16-1-10300 Initiating Offline of Resource masterdataservice_BACKUP (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/02 23:58:08 VCS ERROR V-16-1-10303 Resource stor_mnic (Owner: Unspecified, Group: StorLan) is FAULTED (timed out) on sys duadm2
2013/09/02 23:58:08 VCS WARNING V-16-1-13310 Offlining parent group DDCMon on system duadm2
2013/09/02 23:58:08 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ddc_app (Owner: Unspecified, Group: DDCMon) on System duadm2
2013/09/02 23:58:08 VCS NOTICE V-16-1-10300 Initiating Offline of Resource hyperic_app (Owner: Unspecified, Group: DDCMon) on System duadm2
2013/09/02 23:58:08 VCS INFO V-16-6-0 (duadm2) resfault:(resfault) Invoked with arg0=duadm2, arg1=stor_mnic, arg2=ONLINE
2013/09/02 23:58:08 VCS INFO V-16-0 (duadm2) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm2 ,arg2=stor_mnic
2013/09/02 23:58:08 VCS INFO V-16-6-15002 (duadm2) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm2 stor_mnic ONLINE  successfully
2013/09/02 23:58:21 VCS ERROR V-16-2-13067 (duadm1) Agent is calling clean for resource(stor_p) because the resource became OFFLINE unexpectedly, on its own.
2013/09/02 23:58:21 VCS ERROR V-16-2-13070 (duadm1) Resource(stor_p) - clean not implemented.
2013/09/02 23:58:22 VCS INFO V-16-1-10307 Resource stor_p (Owner: Unspecified, Group: StorLan) is offline on duadm1 (Not initiated by VCS)
2013/09/02 23:58:22 VCS ERROR V-16-1-10205 Group StorLan is faulted on system duadm1
2013/09/02 23:58:22 VCS NOTICE V-16-1-10446 Group StorLan is offline on system duadm1
2013/09/02 23:58:22 VCS INFO V-16-1-10493 Evaluating duadm1 as potential target node for group StorLan
2013/09/02 23:58:22 VCS INFO V-16-1-50010 Group StorLan is online or faulted on system duadm1
2013/09/02 23:58:22 VCS INFO V-16-1-10493 Evaluating duadm2 as potential target node for group StorLan
2013/09/02 23:58:22 VCS INFO V-16-1-50010 Group StorLan is online or faulted on system duadm2
2013/09/02 23:58:22 VCS NOTICE V-16-1-10235 Restart is set for group StorLan. Group will be brought online if fault on persistent resource clears. If group is brought online anywhere else from AutoStartList or manually, then Restart will be reset
2013/09/02 23:58:22 VCS INFO V-16-6-0 (duadm1) resfault:(resfault) Invoked with arg0=duadm1, arg1=stor_p, arg2=ONLINE
2013/09/02 23:58:22 VCS INFO V-16-6-0 (duadm1) postoffline:(postoffline) Invoked with arg0=duadm1, arg1=StorLan
2013/09/02 23:58:22 VCS INFO V-16-0 (duadm1) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm1 ,arg2=stor_p
2013/09/02 23:58:22 VCS INFO V-16-6-0 (duadm1) postoffline:Executing /ericsson/core/cluster/scripts/postoffline.sh with arg0=duadm1, arg1=StorLan
2013/09/02 23:58:22 VCS INFO V-16-6-15002 (duadm1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm1 stor_p ONLINE  successfully
2013/09/02 23:58:22 VCS INFO V-16-6-0 (duadm1) postoffline.sh:StorLan:Nothing done
2013/09/02 23:58:22 VCS INFO V-16-6-0 (duadm1) postoffline:Completed execution of /ericsson/core/cluster/scripts/postoffline.sh for group StorLan
2013/09/02 23:58:22 VCS INFO V-16-6-15002 (duadm1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/postoffline duadm1 StorLan   successfully
2013/09/02 23:58:24 VCS ERROR V-16-2-13027 (duadm1) Resource(cluster_maint) - monitor procedure did not complete within the expected time.
2013/09/02 23:58:30 VCS ERROR V-16-2-13067 (duadm2) Agent is calling clean for resource(stor_p) because the resource became OFFLINE unexpectedly, on its own.
2013/09/02 23:58:30 VCS ERROR V-16-2-13070 (duadm2) Resource(stor_p) - clean not implemented.
2013/09/02 23:58:31 VCS INFO V-16-1-10307 Resource stor_p (Owner: Unspecified, Group: StorLan) is offline on duadm2 (Not initiated by VCS)
2013/09/02 23:58:31 VCS ERROR V-16-1-10205 Group StorLan is faulted on system duadm2
2013/09/02 23:58:31 VCS NOTICE V-16-1-10446 Group StorLan is offline on system duadm2
2013/09/02 23:58:31 VCS INFO V-16-1-10493 Evaluating duadm1 as potential target node for group StorLan
2013/09/02 23:58:31 VCS INFO V-16-1-50010 Group StorLan is online or faulted on system duadm1
2013/09/02 23:58:31 VCS INFO V-16-1-10493 Evaluating duadm2 as potential target node for group StorLan
2013/09/02 23:58:31 VCS INFO V-16-1-50010 Group StorLan is online or faulted on system duadm2
2013/09/02 23:58:31 VCS NOTICE V-16-1-10235 Restart is set for group StorLan. Group will be brought online if fault on persistent resource clears. If group is brought online anywhere else from AutoStartList or manually, then Restart will be reset
2013/09/02 23:58:31 VCS INFO V-16-6-0 (duadm2) resfault:(resfault) Invoked with arg0=duadm2, arg1=stor_p, arg2=ONLINE
2013/09/02 23:58:31 VCS INFO V-16-6-0 (duadm2) postoffline:(postoffline) Invoked with arg0=duadm2, arg1=StorLan
2013/09/02 23:58:31 VCS INFO V-16-0 (duadm2) resfault:(resfault.sh) Invoked with arg0=/ericsson/core/cluster/scripts/resfault.sh, arg1=duadm2 ,arg2=stor_p
2013/09/02 23:58:31 VCS INFO V-16-6-0 (duadm2) postoffline:Executing /ericsson/core/cluster/scripts/postoffline.sh with arg0=duadm2, arg1=StorLan
2013/09/02 23:58:31 VCS INFO V-16-6-15002 (duadm2) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resfault duadm2 stor_p ONLINE  successfully
2013/09/02 23:58:31 VCS INFO V-16-6-0 (duadm2) postoffline.sh:StorLan:Nothing done
2013/09/02 23:58:31 VCS INFO V-16-6-0 (duadm2) postoffline:Completed execution of /ericsson/core/cluster/scripts/postoffline.sh for group StorLan
2013/09/02 23:58:31 VCS INFO V-16-6-15002 (duadm2) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/postoffline duadm2 StorLan   successfully
2013/09/02 23:59:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/02 23:59:33 VCS INFO V-16-10001-88 (duadm1) Application:syb_log_mon:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/eric_3pp/sybase_log_monitor:default] successfully.
2013/09/02 23:59:33 VCS INFO V-16-10001-88 (duadm1) Application:syb_proc_mon:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/eric_3pp/sybase_process_monitor:default] successfully.
2013/09/02 23:59:34 VCS INFO V-16-10001-88 (duadm1) Application:fmria:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/eric_ep/riadaemon:default] successfully.
2013/09/02 23:59:35 VCS INFO V-16-10001-88 (duadm1) Application:trapdist:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/eric_ep/trapd:default] successfully.
2013/09/02 23:59:35 VCS INFO V-16-10001-88 (duadm1) Application:vrsnt_log_mon:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/eric_3pp/versant_log_monitor:default] successfully.
2013/09/02 23:59:36 VCS INFO V-16-1-10305 Resource alex (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/02 23:59:37 VCS INFO V-16-10001-88 (duadm1) Application:imgr_tomcat:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/eric_3pp/tomcat:default] successfully.
2013/09/02 23:59:37 VCS INFO V-16-1-10305 Resource apache (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/02 23:59:38 VCS INFO V-16-1-10305 Resource cron (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/02 23:59:40 VCS INFO V-16-2-13075 (duadm1) Resource(tomcat) has reported unexpected OFFLINE 1 times, which is still within the ToleranceLimit(2).
2013/09/02 23:59:50 VCS INFO V-16-1-10305 Resource restart_mc (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/02 23:59:50 VCS NOTICE V-16-1-10300 Initiating Offline of Resource smssr (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/02 23:59:50 VCS INFO V-16-1-10305 Resource syb_proc_mon (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/02 23:59:52 VCS INFO V-16-1-10305 Resource syb_log_mon (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/02 23:59:52 VCS INFO V-16-1-10305 Resource trapdist (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/02 23:59:52 VCS INFO V-16-1-10305 Resource fmria (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/02 23:59:54 VCS INFO V-16-1-10305 Resource vrsnt_log_mon (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/02 23:59:55 VCS INFO V-16-10001-88 (duadm1) Application:smssr:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/ossrc/ssr:default] successfully.
2013/09/02 23:59:57 VCS INFO V-16-1-10305 Resource imgr_tomcat (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:00:00 VCS INFO V-16-1-10305 Resource smssr (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:00:00 VCS NOTICE V-16-1-10300 Initiating Offline of Resource supervisor (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:00:03 VCS INFO V-16-10001-88 (duadm1) Application:supervisor:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/ossrc/ssrProcessSupervisor:default] successfully.
2013/09/03 00:00:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:00:06 VCS INFO V-16-1-10305 Resource supervisor (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:00:06 VCS NOTICE V-16-1-10300 Initiating Offline of Resource tbs (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:00:11 VCS INFO V-16-2-13075 (duadm1) Resource(tomcat) has reported unexpected OFFLINE 2 times, which is still within the ToleranceLimit(2).
2013/09/03 00:01:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:01:08 VCS ERROR V-16-2-13067 (duadm1) Agent is calling clean for resource(tomcat) because the resource became OFFLINE unexpectedly, on its own.
2013/09/03 00:01:08 VCS INFO V-16-2-13068 (duadm1) Resource(tomcat) - clean completed successfully.
2013/09/03 00:01:08 VCS ERROR V-16-2-13073 (duadm1) Resource(tomcat) became OFFLINE unexpectedly on its own. Agent is restarting (attempt number 1 of 2) the resource.
2013/09/03 00:01:08 VCS INFO V-16-10001-88 (duadm1) Application:tomcat:online:Executed [/ericsson/core/cluster/scripts/svc.sh start /ericsson/eric_3pp/tomcat:default] successfully.
2013/09/03 00:01:09 VCS INFO V-16-2-13716 (duadm1) Resource(tomcat): Output of the completed operation (online)
==============================================
svcadm: Instance "svc:/ericsson/eric_3pp/tomcat:default" is not in a maintenance or degraded state.
==============================================

2013/09/03 00:01:11 VCS NOTICE V-16-2-13076 (duadm1) Agent has successfully restarted resource(tomcat).
2013/09/03 00:01:11 VCS INFO V-16-1-55031 Resource tomcat in online state received recurring online message on system duadm1
2013/09/03 00:01:35 VCS INFO V-16-10001-88 (duadm1) Application:activemq_oss_loggingbroker:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/eric_3pp/activemq_oss_loggingbroker:default] successfully.
2013/09/03 00:01:35 VCS INFO V-16-10001-88 (duadm1) Application:activemq:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/eric_3pp/activemq:default] successfully.
2013/09/03 00:01:36 VCS INFO V-16-2-13716 (duadm1) Resource(activemq): Output of the completed operation (offline)
==============================================
svcadm: Instance "svc:/ericsson/eric_3pp/activemq:default" is in maintenance state.
==============================================

2013/09/03 00:01:36 VCS INFO V-16-2-13716 (duadm1) Resource(activemq_oss_loggingbroker): Output of the completed operation (offline)
==============================================
svcadm: Instance "svc:/ericsson/eric_3pp/activemq_oss_loggingbroker:default" is in maintenance state.
==============================================

2013/09/03 00:01:38 VCS INFO V-16-1-10305 Resource activemq (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:01:38 VCS INFO V-16-1-10305 Resource activemq_oss_loggingbroker (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:01:39 VCS ERROR V-16-20018-25 (duadm2) SybaseBk:masterdataservice_BACKUP:offline:Sybase Backup service masterdataservice_BACKUP cannot be stopped by isql - shutdown
2013/09/03 00:01:39 VCS ERROR V-16-20018-17 (duadm2) SybaseBk:masterdataservice_BACKUP:offline:clean scripts should do the process kill sequence
2013/09/03 00:01:39 VCS INFO V-16-2-13716 (duadm2) Resource(masterdataservice_BACKUP): Output of the completed operation (offline)
==============================================
Password:
CT-LIBRARY error:
    ct_connect(): user api layer: internal Client Library error: Read from the server has timed out.
Password:
CT-LIBRARY error:
    ct_connect(): user api layer: internal Client Library error: Read from the server has timed out.
==============================================

2013/09/03 00:01:40 VCS ERROR V-16-2-13064 (duadm2) Agent is calling clean for resource(masterdataservice_BACKUP) because the resource is up even after offline completed.
2013/09/03 00:01:40 VCS ERROR V-16-20018-20 (duadm2) SybaseBk:masterdataservice_BACKUP:clean:kill -15 of 16419
2013/09/03 00:02:01 VCS INFO V-16-2-13068 (duadm2) Resource(masterdataservice_BACKUP) - clean completed successfully.
2013/09/03 00:02:01 VCS WARNING V-16-20018-301 (duadm2) SybaseBk:masterdataservice_BACKUP:monitor:Open for backupserver failed, setting cookie to NULL
2013/09/03 00:02:01 VCS INFO V-16-1-10305 Resource masterdataservice_BACKUP (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/03 00:02:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource masterdataservice (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/03 00:02:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:02:32 VCS WARNING V-16-2-13011 (duadm1) Resource(hyperic_app): offline procedure did not complete within the expected time.
2013/09/03 00:02:32 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(hyperic_app) because offline did not complete within the expected time.
2013/09/03 00:02:32 VCS WARNING V-16-2-13011 (duadm1) Resource(ddc_app): offline procedure did not complete within the expected time.
2013/09/03 00:02:32 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(ddc_app) because offline did not complete within the expected time.
2013/09/03 00:02:32 VCS INFO V-16-2-13068 (duadm1) Resource(hyperic_app) - clean completed successfully.
2013/09/03 00:02:32 VCS INFO V-16-2-13068 (duadm1) Resource(ddc_app) - clean completed successfully.
2013/09/03 00:02:34 VCS INFO V-16-1-10305 Resource hyperic_app (Owner: Unspecified, Group: DDCMon) is offline on duadm1 (VCS initiated)
2013/09/03 00:02:34 VCS INFO V-16-1-10305 Resource ddc_app (Owner: Unspecified, Group: DDCMon) is offline on duadm1 (VCS initiated)
2013/09/03 00:02:34 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ddc_mount (Owner: Unspecified, Group: DDCMon) on System duadm1
2013/09/03 00:03:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:03:05 VCS WARNING V-16-2-13011 (duadm1) Resource(glassfish): offline procedure did not complete within the expected time.
2013/09/03 00:03:05 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(glassfish) because offline did not complete within the expected time.
2013/09/03 00:03:07 VCS INFO V-16-10001-88 (duadm1) Application:tbs:offline:Executed [/ericsson/core/cluster/scripts/svc.sh stop /ericsson/eric_ep/TBS:default] successfully.
2013/09/03 00:03:08 VCS INFO V-16-2-13716 (duadm1) Resource(tbs): Output of the completed operation (offline)
==============================================
svcadm: Instance "svc:/ericsson/eric_ep/TBS:default" is in maintenance state.
==============================================

2013/09/03 00:03:08 VCS WARNING V-16-2-13011 (duadm2) Resource(hyperic_app): offline procedure did not complete within the expected time.
2013/09/03 00:03:08 VCS WARNING V-16-2-13011 (duadm2) Resource(ddc_app): offline procedure did not complete within the expected time.
2013/09/03 00:03:08 VCS ERROR V-16-2-13063 (duadm2) Agent is calling clean for resource(hyperic_app) because offline did not complete within the expected time.
2013/09/03 00:03:08 VCS ERROR V-16-2-13063 (duadm2) Agent is calling clean for resource(ddc_app) because offline did not complete within the expected time.
2013/09/03 00:03:08 VCS INFO V-16-2-13068 (duadm2) Resource(ddc_app) - clean completed successfully.
2013/09/03 00:03:08 VCS INFO V-16-2-13068 (duadm2) Resource(hyperic_app) - clean completed successfully.
2013/09/03 00:03:10 VCS INFO V-16-1-10305 Resource tbs (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ext_notif (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ext_nsa (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource log_service (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource notif (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource nsa (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource oad (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource opendj (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource osagent (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource rmi_reg (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource rmi_reg_ext (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource sb_nsa (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource sentinel (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource time_service (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource tomcat (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:03:10 VCS INFO V-16-1-10305 Resource ddc_app (Owner: Unspecified, Group: DDCMon) is offline on duadm2 (VCS initiated)
2013/09/03 00:03:10 VCS INFO V-16-1-10305 Resource hyperic_app (Owner: Unspecified, Group: DDCMon) is offline on duadm2 (VCS initiated)
2013/09/03 00:03:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ddc_mount (Owner: Unspecified, Group: DDCMon) on System duadm2
2013/09/03 00:04:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:04:06 VCS ERROR V-16-2-13006 (duadm1) Resource(glassfish): clean procedure did not complete within the expected time.
2013/09/03 00:05:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:06:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:07:01 VCS INFO V-16-2-13003 (duadm2) Resource(masterdataservice): Output of the timed out operation (offline)
Password:
CT-LIBRARY error:
    ct_connect(): user api layer: internal Client Library error: Read from the server has timed out.

2013/09/03 00:07:01 VCS WARNING V-16-2-13011 (duadm2) Resource(masterdataservice): offline procedure did not complete within the expected time.
2013/09/03 00:07:01 VCS ERROR V-16-2-13063 (duadm2) Agent is calling clean for resource(masterdataservice) because offline did not complete within the expected time.
2013/09/03 00:07:02 VCS ERROR V-16-20018-20 (duadm2) Sybase:masterdataservice:clean:kill -15 of 16142
2013/09/03 00:07:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:07:22 VCS INFO V-16-20018-51 (duadm2) Sybase:masterdataservice:clean:Could not open directory [/var/tmp/sybase_shm/masterdataservice] for read [No such file or directory]
2013/09/03 00:07:22 VCS NOTICE V-16-20018-78 (duadm2) Sybase:masterdataservice:clean:Cannot remove Shared Memory.
2013/09/03 00:07:22 VCS INFO V-16-2-13068 (duadm2) Resource(masterdataservice) - clean completed successfully.
2013/09/03 00:07:23 VCS WARNING V-16-20018-301 (duadm2) Sybase:masterdataservice:monitor:Open for dataserver failed, setting cookie to NULL
2013/09/03 00:07:23 VCS INFO V-16-1-10305 Resource masterdataservice (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/03 00:07:23 VCS NOTICE V-16-1-10300 Initiating Offline of Resource syb1bak_ip (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/03 00:07:23 VCS NOTICE V-16-1-10300 Initiating Offline of Resource dbdumps_mount (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/03 00:07:23 VCS NOTICE V-16-1-10300 Initiating Offline of Resource fmsybdata_mount (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/03 00:07:23 VCS NOTICE V-16-1-10300 Initiating Offline of Resource fmsyblog_mount (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/03 00:07:23 VCS NOTICE V-16-1-10300 Initiating Offline of Resource pmsybdata_mount (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/03 00:07:23 VCS NOTICE V-16-1-10300 Initiating Offline of Resource pmsyblog_mount (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/03 00:07:23 VCS NOTICE V-16-1-10300 Initiating Offline of Resource sybdata_mount (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/03 00:07:23 VCS NOTICE V-16-1-10300 Initiating Offline of Resource syblog_mount (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/03 00:07:23 VCS NOTICE V-16-1-10300 Initiating Offline of Resource sybmaster_mount (Owner: Unspecified, Group: Sybase1) on System duadm2
2013/09/03 00:07:25 VCS INFO V-16-1-10305 Resource dbdumps_mount (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/03 00:07:25 VCS INFO V-16-1-10305 Resource fmsyblog_mount (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/03 00:07:25 VCS INFO V-16-1-10305 Resource pmsybdata_mount (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/03 00:07:25 VCS INFO V-16-1-10305 Resource pmsyblog_mount (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/03 00:07:25 VCS INFO V-16-1-10305 Resource sybdata_mount (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/03 00:07:25 VCS INFO V-16-1-10305 Resource fmsybdata_mount (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/03 00:07:25 VCS INFO V-16-1-10305 Resource syblog_mount (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/03 00:07:25 VCS INFO V-16-1-10305 Resource syb1bak_ip (Owner: Unspecified, Group: Sybase1) is offline on duadm2 (VCS initiated)
2013/09/03 00:07:35 VCS WARNING V-16-2-13011 (duadm1) Resource(ddc_mount): offline procedure did not complete within the expected time.
2013/09/03 00:07:35 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(ddc_mount) because offline did not complete within the expected time.
2013/09/03 00:07:35 VCS INFO V-16-10001-5530 (duadm1) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:08:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:08:11 VCS WARNING V-16-2-13011 (duadm1) Resource(ext_nsa): offline procedure did not complete within the expected time.
2013/09/03 00:08:11 VCS WARNING V-16-2-13011 (duadm1) Resource(notif): offline procedure did not complete within the expected time.
2013/09/03 00:08:11 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(notif) because offline did not complete within the expected time.
2013/09/03 00:08:11 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(ext_nsa) because offline did not complete within the expected time.
2013/09/03 00:08:11 VCS WARNING V-16-2-13011 (duadm1) Resource(oad): offline procedure did not complete within the expected time.
2013/09/03 00:08:11 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(oad) because offline did not complete within the expected time.
2013/09/03 00:08:11 VCS INFO V-16-2-13068 (duadm1) Resource(oad) - clean completed successfully.
2013/09/03 00:08:11 VCS INFO V-16-2-13068 (duadm1) Resource(notif) - clean completed successfully.
2013/09/03 00:08:11 VCS WARNING V-16-2-13011 (duadm1) Resource(ext_notif): offline procedure did not complete within the expected time.
2013/09/03 00:08:11 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(ext_notif) because offline did not complete within the expected time.
2013/09/03 00:08:11 VCS WARNING V-16-2-13011 (duadm1) Resource(log_service): offline procedure did not complete within the expected time.
2013/09/03 00:08:11 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(log_service) because offline did not complete within the expected time.
2013/09/03 00:08:11 VCS WARNING V-16-2-13011 (duadm1) Resource(nsa): offline procedure did not complete within the expected time.
2013/09/03 00:08:11 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(nsa) because offline did not complete within the expected time.
2013/09/03 00:08:11 VCS INFO V-16-2-13068 (duadm1) Resource(nsa) - clean completed successfully.
2013/09/03 00:08:11 VCS INFO V-16-2-13068 (duadm1) Resource(ext_nsa) - clean completed successfully.
2013/09/03 00:08:12 VCS WARNING V-16-2-13011 (duadm2) Resource(ddc_mount): offline procedure did not complete within the expected time.
2013/09/03 00:08:12 VCS ERROR V-16-2-13063 (duadm2) Agent is calling clean for resource(ddc_mount) because offline did not complete within the expected time.
2013/09/03 00:08:12 VCS INFO V-16-10001-5530 (duadm2) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:08:12 VCS WARNING V-16-2-13011 (duadm1) Resource(osagent): offline procedure did not complete within the expected time.
2013/09/03 00:08:12 VCS WARNING V-16-2-13011 (duadm1) Resource(opendj): offline procedure did not complete within the expected time.
2013/09/03 00:08:12 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(osagent) because offline did not complete within the expected time.
2013/09/03 00:08:12 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(opendj) because offline did not complete within the expected time.
2013/09/03 00:08:12 VCS INFO V-16-2-13068 (duadm1) Resource(osagent) - clean completed successfully.
2013/09/03 00:08:36 VCS ERROR V-16-2-13006 (duadm1) Resource(ddc_mount): clean procedure did not complete within the expected time.
2013/09/03 00:09:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:09:07 VCS WARNING V-16-2-13011 (duadm1) Resource(rmi_reg_ext): offline procedure did not complete within the expected time.
2013/09/03 00:09:07 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(rmi_reg_ext) because offline did not complete within the expected time.
2013/09/03 00:09:07 VCS INFO V-16-2-13068 (duadm1) Resource(rmi_reg_ext) - clean completed successfully.
2013/09/03 00:09:11 VCS ERROR V-16-2-13006 (duadm1) Resource(ext_notif): clean procedure did not complete within the expected time.
2013/09/03 00:09:11 VCS ERROR V-16-2-13006 (duadm1) Resource(log_service): clean procedure did not complete within the expected time.
2013/09/03 00:09:13 VCS ERROR V-16-2-13006 (duadm2) Resource(ddc_mount): clean procedure did not complete within the expected time.
2013/09/03 00:09:13 VCS ERROR V-16-2-13006 (duadm1) Resource(opendj): clean procedure did not complete within the expected time.
2013/09/03 00:09:28 VCS INFO V-16-1-10305 Resource oad (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:09:28 VCS NOTICE V-16-1-10300 Initiating Offline of Resource gui_nsa (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:09:28 VCS INFO V-16-1-10305 Resource nsa (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:09:28 VCS INFO V-16-1-10305 Resource notif (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:09:30 VCS INFO V-16-1-10305 Resource rmi_reg_ext (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:09:30 VCS INFO V-16-1-10305 Resource osagent (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:09:30 VCS INFO V-16-1-10305 Resource ext_nsa (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:09:32 VCS INFO V-16-1-10305 Resource glassfish (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:09:36 VCS ERROR V-16-2-13077 (duadm1) Agent is unable to offline resource(ddc_mount). Administrative intervention may be required.
2013/09/03 00:09:36 VCS INFO V-16-10001-5530 (duadm1) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:10:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:10:12 VCS ERROR V-16-2-13210 (duadm1) Agent is calling clean for resource(cluster_maint) because 4 successive invocations of the monitor procedure did not complete within the expected time.
2013/09/03 00:10:13 VCS ERROR V-16-2-13077 (duadm2) Agent is unable to offline resource(ddc_mount). Administrative intervention may be required.
2013/09/03 00:10:13 VCS INFO V-16-10001-5530 (duadm2) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:10:13 VCS INFO V-16-1-10305 Resource ext_notif (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:10:13 VCS INFO V-16-1-10305 Resource log_service (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:10:16 VCS INFO V-16-1-10305 Resource opendj (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:11:03 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:11:13 VCS ERROR V-16-2-13006 (duadm1) Resource(cluster_maint): clean procedure did not complete within the expected time.
2013/09/03 00:11:37 VCS INFO V-16-10001-5530 (duadm1) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:12:04 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:12:14 VCS INFO V-16-10001-5530 (duadm2) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:12:16 VCS INFO V-16-2-13026 (duadm1) Resource(cluster_maint) - monitor procedure finished successfully after failing to complete within the expected time for (4) consecutive times.
2013/09/03 00:12:16 VCS INFO V-16-2-13075 (duadm1) Resource(cluster_maint) has reported unexpected OFFLINE 1 times, which is still within the ToleranceLimit(2).
2013/09/03 00:12:24 VCS WARNING V-16-2-13011 (duadm2) Resource(sybmaster_mount): offline procedure did not complete within the expected time.
2013/09/03 00:12:24 VCS ERROR V-16-2-13063 (duadm2) Agent is calling clean for resource(sybmaster_mount) because offline did not complete within the expected time.
2013/09/03 00:12:24 VCS INFO V-16-10001-5530 (duadm2) Mount:sybmaster_mount:clean:Checking for loopback mount.
2013/09/03 00:13:04 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:13:13 VCS WARNING V-16-2-13011 (duadm1) Resource(tomcat): offline procedure did not complete within the expected time.
2013/09/03 00:13:13 VCS WARNING V-16-2-13011 (duadm1) Resource(time_service): offline procedure did not complete within the expected time.
2013/09/03 00:13:13 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(tomcat) because offline did not complete within the expected time.
2013/09/03 00:13:13 VCS WARNING V-16-2-13011 (duadm1) Resource(rmi_reg): offline procedure did not complete within the expected time.
2013/09/03 00:13:13 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(time_service) because offline did not complete within the expected time.
2013/09/03 00:13:13 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(rmi_reg) because offline did not complete within the expected time.
2013/09/03 00:13:13 VCS WARNING V-16-2-13011 (duadm1) Resource(sb_nsa): offline procedure did not complete within the expected time.
2013/09/03 00:13:13 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(sb_nsa) because offline did not complete within the expected time.
2013/09/03 00:13:13 VCS INFO V-16-2-13068 (duadm1) Resource(tomcat) - clean completed successfully.
2013/09/03 00:13:13 VCS INFO V-16-2-13068 (duadm1) Resource(rmi_reg) - clean completed successfully.
2013/09/03 00:13:14 VCS WARNING V-16-2-13011 (duadm1) Resource(sentinel): offline procedure did not complete within the expected time.
2013/09/03 00:13:14 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(sentinel) because offline did not complete within the expected time.
2013/09/03 00:13:14 VCS INFO V-16-2-13068 (duadm1) Resource(sentinel) - clean completed successfully.
2013/09/03 00:13:15 VCS INFO V-16-1-10305 Resource rmi_reg (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:13:15 VCS INFO V-16-1-10305 Resource tomcat (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:13:16 VCS INFO V-16-1-10305 Resource sentinel (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:13:16 VCS INFO V-16-2-13075 (duadm1) Resource(cluster_maint) has reported unexpected OFFLINE 2 times, which is still within the ToleranceLimit(2).
2013/09/03 00:13:25 VCS ERROR V-16-2-13006 (duadm2) Resource(sybmaster_mount): clean procedure did not complete within the expected time.
2013/09/03 00:13:38 VCS INFO V-16-10001-5530 (duadm1) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:14:04 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:14:14 VCS ERROR V-16-2-13006 (duadm1) Resource(sb_nsa): clean procedure did not complete within the expected time.
2013/09/03 00:14:14 VCS ERROR V-16-2-13006 (duadm1) Resource(time_service): clean procedure did not complete within the expected time.
2013/09/03 00:14:15 VCS ERROR V-16-2-13067 (duadm1) Agent is calling clean for resource(cluster_maint) because the resource became OFFLINE unexpectedly, on its own.
2013/09/03 00:14:15 VCS INFO V-16-10001-5530 (duadm2) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:14:16 VCS INFO V-16-1-10305 Resource time_service (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:14:16 VCS INFO V-16-1-10305 Resource sb_nsa (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:14:25 VCS ERROR V-16-2-13077 (duadm2) Agent is unable to offline resource(sybmaster_mount). Administrative intervention may be required.
2013/09/03 00:14:26 VCS INFO V-16-1-53504 VCS Engine Alive message!!
2013/09/03 00:14:26 VCS INFO V-16-10001-5530 (duadm2) Mount:sybmaster_mount:clean:Checking for loopback mount.
2013/09/03 00:14:39 VCS WARNING V-16-2-13011 (duadm1) Resource(gui_nsa): offline procedure did not complete within the expected time.
2013/09/03 00:14:39 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(gui_nsa) because offline did not complete within the expected time.
2013/09/03 00:14:39 VCS INFO V-16-2-13068 (duadm1) Resource(gui_nsa) - clean completed successfully.
2013/09/03 00:14:41 VCS INFO V-16-1-10305 Resource gui_nsa (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:14:41 VCS NOTICE V-16-1-10300 Initiating Offline of Resource change_versant (Owner: Unspecified, Group: Oss) on System duadm1
2013/09/03 00:15:04 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:15:16 VCS ERROR V-16-2-13006 (duadm1) Resource(cluster_maint): clean procedure did not complete within the expected time.
2013/09/03 00:15:40 VCS INFO V-16-10001-5530 (duadm1) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:16:04 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:16:16 VCS INFO V-16-10001-5530 (duadm2) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:16:27 VCS INFO V-16-10001-5530 (duadm2) Mount:sybmaster_mount:clean:Checking for loopback mount.
2013/09/03 00:17:04 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:17:41 VCS INFO V-16-10001-5530 (duadm1) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:17:58 VCS WARNING V-16-2-13011 (duadm1) Resource(stop_oss): offline procedure did not complete within the expected time.
2013/09/03 00:17:58 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(stop_oss) because offline did not complete within the expected time.
2013/09/03 00:18:04 VCS INFO V-16-1-50135 User root fired command: hagrp -switch Oss  duadm2  from localhost
2013/09/03 00:18:04 VCS INFO V-16-10001-1 (duadm1) Application:stop_oss:stop_oss.sh:Waiting for Oss to go Offline
2013/09/03 00:18:18 VCS INFO V-16-10001-5530 (duadm2) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:18:28 VCS INFO V-16-10001-5530 (duadm2) Mount:sybmaster_mount:clean:Checking for loopback mount.
2013/09/03 00:18:59 VCS ERROR V-16-2-13006 (duadm1) Resource(stop_oss): clean procedure did not complete within the expected time.
2013/09/03 00:19:42 VCS INFO V-16-10001-5530 (duadm1) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:19:42 VCS WARNING V-16-2-13011 (duadm1) Resource(change_versant): offline procedure did not complete within the expected time.
2013/09/03 00:19:42 VCS ERROR V-16-2-13063 (duadm1) Agent is calling clean for resource(change_versant) because offline did not complete within the expected time.
2013/09/03 00:19:42 VCS INFO V-16-2-13068 (duadm1) Resource(change_versant) - clean completed successfully.
2013/09/03 00:19:44 VCS INFO V-16-1-10305 Resource change_versant (Owner: Unspecified, Group: Oss) is offline on duadm1 (VCS initiated)
2013/09/03 00:19:44 VCS NOTICE V-16-1-10446 Group Oss is offline on system duadm1
2013/09/03 00:19:44 VCS INFO V-16-6-15025 (duadm2) hatrigger:invoking nfs_preonline
2013/09/03 00:19:44 VCS INFO V-16-6-15076 (duadm2) hatrigger:invoking regular preonline trigger if it exists
2013/09/03 00:19:44 VCS INFO V-16-6-0 (duadm1) postoffline:(postoffline) Invoked with arg0=duadm1, arg1=Oss
2013/09/03 00:19:44 VCS INFO V-16-6-0 (duadm1) postoffline:Executing /ericsson/core/cluster/scripts/postoffline.sh with arg0=duadm1, arg1=Oss
2013/09/03 00:19:44 VCS INFO V-16-6-0 (duadm2) preonline:(preonline) Invoked with arg0=duadm2, arg1=Oss
2013/09/03 00:19:44 VCS INFO V-16-6-0 (duadm2) preonline:Executing /ericsson/core/cluster/scripts/preonline.sh with arg0=duadm2, arg1=Oss
2013/09/03 00:19:44 VCS INFO V-16-6-0 (duadm1) postoffline.sh:Oss:Stop old FM processes
2013/09/03 00:19:44 VCS INFO V-16-6-0 (duadm1) postoffline.sh:Oss:Stopping dummy SMF resources for Sybase
2013/09/03 00:19:52 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:20:00 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:20:01 VCS INFO V-16-1-10305 Resource stop_oss (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource cluster_maint (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource smrs_nfs (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ossbak_ip (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource eba_ebsg_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource eba_ebss_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource eba_ebsw_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource eba_rede_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource eba_rtt_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource home_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource nms_cosm_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource pmstor_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource segment1_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource sgwcg_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource versant_mount (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource a3pp_share (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource ericsson_share (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource etc_share (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource mail_share (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource opt_share (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource upgrade_share (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource usr_local_share (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:01 VCS NOTICE V-16-1-10300 Initiating Offline of Resource var_share (Owner: Unspecified, Group: Ossfs) on System duadm1
2013/09/03 00:20:02 VCS INFO V-16-1-10305 Resource a3pp_share (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:20:02 VCS INFO V-16-1-10305 Resource ericsson_share (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:20:02 VCS INFO V-16-1-10305 Resource etc_share (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:20:02 VCS INFO V-16-1-10305 Resource opt_share (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:20:02 VCS INFO V-16-1-10305 Resource usr_local_share (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:20:02 VCS INFO V-16-1-10305 Resource mail_share (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:20:02 VCS INFO V-16-1-10305 Resource var_share (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:20:02 VCS INFO V-16-1-10305 Resource upgrade_share (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:20:03 VCS INFO V-16-1-10305 Resource ossbak_ip (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:20:08 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:20:17 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:20:19 VCS INFO V-16-10001-5530 (duadm2) Mount:ddc_mount:clean:Checking for loopback mount.
2013/09/03 00:20:25 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:20:29 VCS INFO V-16-10001-5530 (duadm2) Mount:sybmaster_mount:clean:Checking for loopback mount.
2013/09/03 00:20:33 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:20:41 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:20:49 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:20:57 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:21:03 VCS INFO V-16-1-10305 Resource eba_ebsw_mount (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:21:03 VCS INFO V-16-1-10305 Resource eba_rtt_mount (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:21:03 VCS INFO V-16-1-10305 Resource eba_rede_mount (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:21:03 VCS INFO V-16-1-10305 Resource pmstor_mount (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:21:03 VCS INFO V-16-1-10305 Resource eba_ebss_mount (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:21:03 VCS INFO V-16-1-10305 Resource nms_cosm_mount (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:21:03 VCS INFO V-16-1-10305 Resource eba_ebsg_mount (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:21:03 VCS INFO V-16-1-10305 Resource segment1_mount (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:21:04 VCS INFO V-16-1-10305 Resource versant_mount (Owner: Unspecified, Group: Ossfs) is offline on duadm1 (VCS initiated)
2013/09/03 00:21:05 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:21:13 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:21:21 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
2013/09/03 00:21:29 VCS INFO V-16-6-0 (duadm2) preonline.sh:Oss:Waiting for Sybase1 to come online globally
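The recurring pattern in the log above — an offline procedure timing out (V-16-2-13011), the agent calling clean (V-16-2-13063), and then clean itself timing out (V-16-2-13006) for stop_oss and cluster_maint — usually indicates that the stop scripts need longer than the configured OfflineTimeout/CleanTimeout allows. A hedged sketch of how those attributes could be inspected and raised with the standard VCS CLI follows; the timeout values shown are illustrative examples, not recommendations, and the defaults noted in the comments are the usual VCS shipping defaults:

```shell
# Inspect the current timeout settings on the resources that timed out
hares -display stop_oss -attribute OfflineTimeout
hares -display stop_oss -attribute CleanTimeout
hares -display cluster_maint -attribute CleanTimeout

# Open the configuration, raise the timeouts (example values only),
# then write the config back and make it read-only again
haconf -makerw
hares -modify stop_oss OfflineTimeout 600   # typical default: 300 seconds
hares -modify stop_oss CleanTimeout 120     # typical default: 60 seconds
haconf -dump -makero
```

If the timeouts are already generous, the next step would be running the offline script (here stop_oss.sh) manually and timing it, since a hang inside the application stop logic produces the same log signature.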