Channel: Symantec Connect - Storage and Clustering - Discussions

VCS 6.0PR1 on Solaris11 zone resource not coming up


Here is my main.cf:

 

include "OracleASMTypes.cf"
include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SybaseTypes.cf"

cluster s11cluster (
        UserNames = { admin = hOPhOJoLPkPPnXPjOM,
                 administrator = aPQiPKpMQlQQoYQkPN,
                 z_zone_res_solaris11-1 = fMNfMHmJNiNNlVNhMK }
        ClusterAddress = "192.168.0.40"
        Administrators = { admin }
        )

system solaris11-1 (
        )

system solaris11-2 (
        )

group ClusterService (
        SystemList = { solaris11-1 = 0, solaris11-2 = 1 }
        AutoStartList = { solaris11-1, solaris11-2 }
        OnlineRetryLimit = 3
        OnlineRetryInterval = 120
        )

        IP webip (
                Device = ipmp0
                Address = "192.168.0.40"
                NetMask = "255.255.255.0"
                )

        NIC csgnic (
                Device = ipmp0
                )

        webip requires csgnic

        // resource dependency tree
        //

        //      group ClusterService
        //      {
        //      IP webip
        //          {
        //          NIC csgnic
        //          }
        //      }

group zpoolgrp (
        SystemList = { solaris11-1 = 0, solaris11-2 = 1 }
        ContainerInfo @solaris11-1 = { Name = z1, Type = Zone, Enabled = 1 }
        ContainerInfo @solaris11-2 = { Name = z1, Type = Zone, Enabled = 1 }
        AutoStartList = { solaris11-1, solaris11-2 }
        Administrators = { z_zone_res_solaris11-1 }
        )

        Zone zone_res (
                )

        Zpool zpool_oradata (
                PoolName = oradata
                )

        Zpool zpool_orahome (
                PoolName = orahome
                )

        Zpool zpool_zoneroot (
                PoolName = zoneroot
                )

        zone_res requires zpool_oradata
        zone_res requires zpool_orahome
        zone_res requires zpool_zoneroot
        zpool_oradata requires zpool_zoneroot
        zpool_orahome requires zpool_zoneroot

        // resource dependency tree
        //
        //      group zpoolgrp
        //      {
        //      Zone zone_res
        //          {
        //          Zpool zpool_zoneroot
        //          Zpool zpool_oradata
        //              {
        //              Zpool zpool_zoneroot
        //              }
        //          Zpool zpool_orahome
        //              {
        //              Zpool zpool_zoneroot
        //              }
        //          }
        //      }
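
For triage of the Zone resource itself, a first pass is usually to check whether the zpools and the zone come up by hand and what the agent logged. A hedged sketch using standard Solaris/VCS commands and the names from the main.cf above:

```
# The zpools must be imported before the zone can attach/boot:
#   zpool list zoneroot oradata orahome
#   zoneadm list -cv            # z1 should show as installed
# Try the zone manually, then check what the agent logged:
#   zoneadm -z z1 boot
#   grep zone_res /var/VRTSvcs/log/engine_A.log
```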
 


need a solution


 

 

We have a 2-node cluster running version 5.1.

We experienced an outage, and I think it was due to the error messages below.

Can someone shed some light on these messages?

 qlc: [ID 630585 kern.info] NOTICE: Qlogic qlc(1): Loop OFFLINE
qlc: [ID 630585 kern.info] NOTICE: Qlogic qlc(1): Loop ONLINE
 fctl: [ID 999315 kern.warning] WARNING: fctl(4): AL_PA=0xe8 doesn't exist in LILP map

 scsi: [ID 107833 kern.warning] WARNING: /pci@0,600000/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w203400a0b875f9d9,0 (ssd3):
    Command failed to complete...Device is gone
 scsi: [ID 107833 kern.warning] WARNING: /pci@0,600000/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w203400a0b875f9d9,0 (ssd3):
    Command failed to complete...Device is gone
 scsi: [ID 107833 kern.warning] WARNING: /pci@0,600000/pci@0/pci@9/SUNW,qlc@0/fp@0,0/ssd@w203400a0b875f9d9,0 (ssd3):
    Command failed to complete...Device is gone
 scsi: [ID 243001 kern.info] /pci@0,600000/pci@0/pci@9/SUNW,qlc@0/fp@0,0 (fcp4):
    offlining lun=0 (trace=0), target=e8 (trace=2800004)
 vxdmp: [ID 631182 kern.notice] NOTICE: VxVM vxdmp V-5-0-0 removed disk array 600A0B800075F9D9000000004D2334F5, datype = ST2540-
vxdmp: [ID 443116 kern.notice] NOTICE: VxVM vxdmp V-5-0-0 i/o error occured (errno=0x6) on dmpnode 334/0x2c
 last message repeated 59 times
 vxdmp: [ID 480808 kern.notice] NOTICE: VxVM vxdmp V-5-0-112 disabled path 118/0x18 belonging to the dmpnode 334/0x28 due to open failure
 vxdmp: [ID 824220 kern.notice] NOTICE: VxVM vxdmp V-5-0-111 disabled dmpnode 334/0x28

 

What does this dmpnode 334/0x28 signify? I forget how to map it back to a device; all I remember is that it is in hexadecimal.
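
For mapping it back: the dmpnode is printed as a major/minor pair, with the minor in hexadecimal. A minimal sketch of the mapping, assuming the standard /dev/vx/dmp device layout (the awk fields match the usual ls -l columns for block devices):

```shell
# dmpnode 334/0x28: major 334, minor 0x28. Convert the minor to decimal:
minor=$(printf '%d' 0x28)
echo "$minor"    # 40
# Then find the DMP node whose device file carries that major/minor pair
# (run this on the cluster node; /dev/vx/dmp does not exist elsewhere):
#   ls -lL /dev/vx/dmp | awk -v m="$minor" '$5 == "334," && $6 == m'
```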

 

Also, what could be the cause of it?

Is it due to the HBA? The issue starts with messages like the following:

 

 qlc: [ID 630585 kern.info] NOTICE: Qlogic qlc(1): Loop OFFLINE
qlc: [ID 630585 kern.info] NOTICE: Qlogic qlc(1): Loop ONLINE
 fctl: [ID 999315 kern.warning] WARNING: fctl(4): AL_PA=0xe8 doesn't exist in LILP map

 

Need information on how to configure .vmdks with VCS to allow for vMotion


 

I was wondering if anyone in this forum has used virtual disks (.vmdk format) with VMware 5.1 and SFWHA 6.0.1 while still taking advantage of DRS and vMotion?

What a co-worker and I are trying to accomplish is to use .vmdk disks instead of RDMs in a configuration that allows DRS and vMotion.

We can fail over the application group from virtual system to virtual system without issue, but when the guest system with the .vmdk gets vMotioned, the other guest systems in the cluster cannot 'find' the disks/disk groups for that particular application (in this example, Exchange 2010).

I have been looking through the SFWHA documentation but have not found anything directly describing how the environment needs to be set up for this to work. If anyone has the information in a document, technote, or SFWHA manual that we missed, a reference would be greatly appreciated.

Thanks,

Chip

Recovering a lost NORAID Logical Drive


Hello Everybody

I have a Veritas Cluster environment with two nodes that can access two Sun StorEdge 3320 arrays.

Two logical drives were created on each StorEdge, with NORAID.

One of these logical drives lost a disk, and because of the NORAID option the data was completely lost.

Because we have Veritas Volume Manager and Veritas Cluster, the DGs were detached and attached to the other StorEdge.

A new disk was added to the bad logical drive and the LUNs were recreated.

Now I have to recreate the disks in Veritas Volume Manager. Can anyone help me?

Take a look at these commands:

root@svmmprod1 # vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0t0d0s2     auto:cdsdisk    -            -            online
c0t0d1s2     auto:cdsdisk    bgw1dgc0t0d1  bgw1dg       online
c0t0d2s2     auto:cdsdisk    -            -            online
c0t0d3s2     auto:cdsdisk    -            -            online
c0t0d4s2     auto:cdsdisk    -            -            online
c2t0d0s2     auto:sliced     rootdisk     rootdg       online
c2t1d0s2     auto:sliced     rootmirror   rootdg       online
c2t2d0s2     auto:cdsdisk    -            -            online
c2t3d0s2     auto:cdsdisk    -            -            online
c5t2d0s2     auto            -            -            error
c5t2d1s2     auto            -            -            error
c5t2d2s2     auto            -            -            error
c5t2d3s2     auto:cdsdisk    -            -            online
c5t2d4s2     auto:cdsdisk    -            -            online
-            -         bgw1dgc5t2d1 bgw1dg       failed was:c5t2d1s2

##############################################################################

root@svmmprod2 # vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0t2d0s2     auto:cdsdisk    ora1dgc0t0d0  ora1dg       online
c0t2d1s2     auto:cdsdisk    -            -            online
c0t2d2s2     auto:cdsdisk    bgw2dgc0t2d2  bgw2dg       online
c0t2d3s2     auto:cdsdisk    -            -            online
c0t2d4s2     auto:cdsdisk    -            -            online
c2t0d0s2     auto:sliced     rootdisk     rootdg       online
c2t1d0s2     auto:sliced     rootmirror   rootdg       online
c2t2d0s2     auto:cdsdisk    oraarch1dg02  oraarch1dg   online
c2t3d0s2     auto:cdsdisk    oradata1dg02  oradata1dg   online
c4t0d0s2     auto            -            -            error
c4t0d1s2     auto            -            -            error
c4t0d2s2     auto            -            -            error
c4t0d3s2     auto:cdsdisk    oradata1dg01  oradata1dg   online
c4t0d4s2     auto:cdsdisk    oraarch1dg01  oraarch1dg   online
-            -         ora1dg01     ora1dg       failed was:c4t0d0s2
-            -         bgw2dgc4t0d2 bgw2dg       failed was:c4t0d2s2
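
Since the LUNs were recreated, the usual sequence is to re-initialize each new device and re-add it to its disk group under the old disk-media name. A hedged sketch using the names from the first listing (verify the device names on your system before running anything):

```
# Initialize the replacement LUN for VxVM use:
#   vxdisksetup -i c5t2d1
# Re-add it to the disk group under the old DM name (-k keeps the name):
#   vxdg -g bgw1dg -k adddisk bgw1dgc5t2d1=c5t2d1
# Then recover/resynchronize any volumes that used the disk:
#   vxrecover -g bgw1dg -sb
```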
 

 

Thanks for now

Alexandre Andrich

SF 5.1 SP1 RP3 with Oracle 10gR2

VCS failover service group


Hi Guys,

 

I have some inquiry regarding VCS failover service group.

This example is for 3 node cluster:

 

There are two service groups in the <clustername> cluster: app1_SG running on node_1 and app2_SG running on node_2. On any failure, either service group will switch to node_3. However, the two service groups must never run on the same node; i.e., while app1_SG is on node_3, app2_SG must not also switch to node_3, and vice versa.

Is there a way to configure this setup?
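
For reference, one documented way to express "these groups must never share a node" is the VCS Limits and Prerequisites attributes: give each system one unit of a capacity and have each group consume the whole unit. A sketch using the names from the post (the attribute name GroupSlot is illustrative):

```
system node_3 (
        Limits = { GroupSlot = 1 }
        )
// node_1 and node_2 need the same Limits entry so the groups can run there

group app1_SG (
        SystemList = { node_1 = 0, node_3 = 1 }
        Prerequisites = { GroupSlot = 1 }
        )

group app2_SG (
        SystemList = { node_2 = 0, node_3 = 1 }
        Prerequisites = { GroupSlot = 1 }
        )
```

With both groups requiring GroupSlot = 1 and node_3 offering only 1, whichever group fails over first takes the slot and the other cannot follow it there.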

Any comments will be appreciated.

 

Thanks.


trying to configure I/O fencing


I need help with this: I am trying to configure I/O fencing, but I get this error:

Would you like to configure I/O fencing on the cluster? [y,n,q] y

    Checking communication on system1 .............................................................................................................. Done
    Checking release compatibility on system1 ...................................................................................................... Done
    Checking VCS installation on system1 ............................................................................................ Version 6.0.100.000
    Checking communication on system2 ............................................................................................................ Failed
CPI ERROR V-9-20-1262 Cannot resolve hostname system2
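
That CPI error is a name-resolution failure rather than a fencing problem as such: the installer on system1 cannot look up the peer's hostname. A quick check, as a sketch (hostnames taken from the log):

```shell
# Can the peer's name be resolved at all?
getent hosts system2 || echo "system2 is not resolvable"
# If there is no DNS entry, adding the peer to /etc/hosts on every node is
# enough for the installer, e.g. (the address here is illustrative):
#   192.168.0.42   system2
```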


how to add LUNs in cluster service groups


Hi

 

I installed Veritas on Red Hat 6.3; from VOM I added the LUNs and created volumes on the disks, and everything is working well.

Now I have created cluster service groups, so how should resources (the LUNs) be added to each service group?
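
For reference, storage is typically brought under a service group as a DiskGroup resource plus one Mount resource per file system, with the mount depending on the disk group. A minimal main.cf sketch with illustrative names (disk group appdg, volume appvol, mount point /app):

```
group app_sg (
        SystemList = { node1 = 0, node2 = 1 }
        AutoStartList = { node1 }
        )

        DiskGroup app_dg_res (
                DiskGroup = appdg
                )

        Mount app_mnt_res (
                MountPoint = "/app"
                BlockDevice = "/dev/vx/dsk/appdg/appvol"
                FSType = vxfs
                FsckOpt = "-y"
                )

        app_mnt_res requires app_dg_res
```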

 

please help me

 

Thanks


Vxdiskadm Replace disk does not show my failed disk


Hello all,

 

I would like to replace my failed disk via the vxdiskadm utility:

 

EMMDPD04:/# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0t0d0s2     auto:sliced     rootdisk     rootdg       online
c0t1d0s2     auto:sliced     rootmirror   rootdg       online
c0t2d0s2     auto            -            -            error
c0t3d0s2     auto:none       -            -            online invalid
emcpower0s2  auto:sliced     bgw1dgemcpower0  bgw1dg       online shared
emcpower1s2  auto:sliced     ora1dgemcpower7  ora1dg       online shared
emcpower2s2  auto:sliced     fmm1dgemcpower6  fmm1dg       online shared
emcpower3s2  auto:sliced     bgw1dgemcpower1  bgw1dg       online shared
emcpower4s2  auto:cdsdisk    -            -            online
emcpower5s2  auto:cdsdisk    -            -            online
emcpower6s2  auto:sliced     lic1dgemcpower5  lic1dg       online shared
emcpower7s2  auto:cdsdisk    -            -            online
emcpower8s2  auto:sliced     bgw1dgemcpower8  bgw1dg       online shared
emcpower9s2  auto:sliced     bgw1dgemcpower9  bgw1dg       online shared
emcpower10s2 auto:sliced     bgw1dgemcpower10  bgw1dg       online shared
emcpower11s2 auto:sliced     bgw1dgemcpower11  bgw1dg       online shared
emcpower12s2 auto:sliced     bgw1dgemcpower12  bgw1dg       online shared
emcpower13s2 auto:sliced     bgw1dgemcpower13  bgw1dg       online shared
emcpower14s2 auto:sliced     bgw1dgemcpower14  bgw1dg       online shared
emcpower15s2 auto:sliced     bgw1dgemcpower15  bgw1dg       online shared
EMMDPD04:/# 
EMMDPD04:/# 
EMMDPD04:/# 
EMMDPD04:/# vxdiskadm
 
Volume Manager Support Operations
Menu: VolumeManager/Disk
 
 1      Add or initialize one or more disks
 2      Encapsulate one or more disks
 3      Remove a disk
 4      Remove a disk for replacement
 5      Replace a failed or removed disk
 6      Mirror volumes on a disk
 7      Move volumes from a disk
 8      Enable access to (import) a disk group
 9      Remove access to (deport) a disk group
 10     Enable (online) a disk device
 11     Disable (offline) a disk device
 12     Mark a disk as a spare for a disk group
 13     Turn off the spare flag on a disk
 14     Unrelocate subdisks back to a disk
 15     Exclude a disk from hot-relocation use
 16     Make a disk available for hot-relocation use
 17     Prevent multipathing/Suppress devices from VxVM's view
 18     Allow multipathing/Unsuppress devices from VxVM's view
 19     List currently suppressed/non-multipathed devices
 20     Change the disk naming scheme
 21     Get the newly connected/zoned disks in VxVM view
 22     Change/Display the default disk layouts
 23     Mark a disk as allocator-reserved for a disk group
 24     Turn off the allocator-reserved flag on a disk
 list   List disk information
 
 
 ?      Display help about menu
 ??     Display help about the menuing system
 q      Exit from menus
 
Select an operation to perform: 4
 
Remove a disk for replacement
Menu: VolumeManager/Disk/RemoveForReplace
  Use this menu operation to remove a physical disk from a disk
  group, while retaining the disk name.  This changes the state
  for the disk name to a "removed" disk.  If there are any
  initialized disks that are not part of a disk group, you will be
  given the option of using one of these disks as a replacement.
 
Enter disk name [<disk>,list,q,?] list
 
Disk group: rootdg
 
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
 
dm rootdisk     c0t0d0s2     auto     80321    583834230 -
dm rootmirror   c0t1d0s2     auto     80321    583850295 -
 
Disk group: bgw1dg
 
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
 
dm bgw1dgemcpower0 emcpower0s2 auto   96130    1312574752 -
dm bgw1dgemcpower1 emcpower3s2 auto   96130    1310486288 -
dm bgw1dgemcpower8 emcpower8s2 auto   96130    1576844000 -
dm bgw1dgemcpower9 emcpower9s2 auto   96130    1576844000 -
dm bgw1dgemcpower10 emcpower10s2 auto 96130    1576844000 -
dm bgw1dgemcpower11 emcpower11s2 auto 96130    1576844000 -
dm bgw1dgemcpower12 emcpower12s2 auto 96130    1314695328 -
dm bgw1dgemcpower13 emcpower13s2 auto 96130    1293746560 -
dm bgw1dgemcpower14 emcpower14s2 auto 96130    1576844000 -
dm bgw1dgemcpower15 emcpower15s2 auto 96130    1576844000 -
 
Disk group: fmm1dg
 
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
 
dm fmm1dgemcpower6 emcpower2s2 auto   80065    104711648 -
 
Disk group: lic1dg
 
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
 
dm lic1dgemcpower5 emcpower6s2 auto   67324    2019328  -
 
Disk group: ora1dg
 
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
 
dm ora1dgemcpower7 emcpower1s2 auto   80065    419280416 -
 
Enter disk name [<disk>,list,q,?] c0t2d0s2                
  VxVM  ERROR V-5-2-400
There is no disk named c0t2d0s2 in any disk group configuration.
  To get a list of disks enter "list".
 
Enter disk name [<disk>,list,q,?] q
 
 
Is this the proper way, or is there another way to replace the failed disk?
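
One note on the error: the "Remove a disk for replacement" prompt expects a disk-media (DM) name such as bgw1dgemcpower0, not a device name such as c0t2d0s2, and it only applies to disks that belong to a disk group. Since c0t2d0s2 shows up as a bare device in the error state, a hedged sketch of the usual alternative (DG and DM names are placeholders):

```
# Re-scan and check whether the device is healthy again:
#   vxdisk scandisks
#   vxdisk list c0t2d0s2
# If a failed disk is still a DG member, use option 4 with the DM name
# shown by "vxdisk list", not the device name.
# If the disk is unowned, initialize it and add it where it is needed:
#   vxdisksetup -i c0t2d0
#   vxdg -g <dg> adddisk <dmname>=c0t2d0
```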
 
 

VCS6.0PR1 Oracle resource not starting


Hi, I am getting this message:

 

==============================================
/bin/sh[1]: cd: /opt/VRTSagents/ha/bin/Oracle: [No such file or directory]
/bin/sh: line 1: /opt/VRTSagents/ha/bin/Oracle/online: not found
 

==============================================
/bin/sh[1]: cd: /opt/VRTSagents/ha/bin/Oracle: [No such file or directory]
/bin/sh: line 1: /opt/VRTSagents/ha/bin/Oracle/clean: not found
==============================================
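
Those paths belong to the VCS enterprise agent for Oracle, so the error usually means the agent package is missing on that node. A sketch of the usual check on Solaris 11 (package name VRTSvcsea per the SFHA 6.0 media; verify for your release):

```
# Is the Oracle agent installed?
#   pkg list VRTSvcsea
#   ls /opt/VRTSagents/ha/bin/Oracle
# If missing, install VRTSvcsea from the SFHA media on that node,
# then probe the resource:
#   hares -probe <oracle_res> -sys <node>
```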
 

Clustering EV with SFW-HA (GCO)?


Hi.

I'm working on DR of EV with SFW-HA (GCO). The primary site is set up according to the docs, but I'm failing when working on the secondary site. Per the doc, I have to offline the EV service group on the primary and import/mount the replicated DG on the secondary. That is done; however, when I run the EV Configuration Wizard to set up EV on the secondary site, it looks for the EV environment on the primary, which is of course offline.

Did I miss anything?

Any idea is most welcome.

Custom format of logs in engine_A.log


Hey

Do you happen to know if there is a way to tweak VCS so that it carries additional information in every log entry? Preferably, I would like every entry in the log to display the cluster name and ID, before or after the error code. Why? I need that for Tivoli monitoring to be able to identify events coming from the same cluster, and as a result create one alert instead of as many alerts as there are cluster nodes. Only by adding those two fields would I feel comfortable enabling deduplication on the Tivoli end; together with the event summary they would form a unique key. Without them I cannot enable it, given the size of the environment: clusters can have similar resource/group names while sharing infrastructure components such as SAN or network, and a single issue there can cause failures on a number of clusters at the same time.

Thanks
Wojtek


Problem with Veritas Cluster


Hi

I installed Veritas on Red Hat 6.3 on 2 hosts and created LUNs, and it was working well. Then we had a power failure and the servers went down. After the problem was fixed and the servers were restarted, server 1 works fine, but server 2 gives me this message:

 

lpfc 0000:02:00.2 SCSI layer issued device reset 0, 4 return x2007

lpfc 0000:02:00.2 SCSI layer issued device reset 0, 3 return x2002

 

lpfc 0000:02:00.2 SCSI layer issued device reset 0, 2 return x2002

lpfc 0000:02:00.2 SCSI layer issued device reset 0, 1 return x2002
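
Those lpfc messages are SCSI-layer device resets reported by the Emulex HBA driver, which points at the FC path rather than Veritas itself. A hedged sketch of the first checks on RHEL (the sysfs paths are standard for FC HBAs):

```shell
# Check the FC port state of each HBA; a healthy link reports "Online":
for h in /sys/class/fc_host/host*; do
    [ -e "$h" ] || continue
    printf '%s: %s\n' "$h" "$(cat "$h/port_state")"
done
# Then re-check what VxVM sees after the link is back:
#   vxdisk scandisks
#   vxdmpadm getsubpaths
```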

 

Please help

Thank you very much


VCS 6.0.1 Solaris 11 Oracle 11gR2 everything works fine now, except for....


Getting this error message while offlining the Oracle SG:

 

 

2013/05/15 10:43:31 VCS ERROR V-16-10001-14058 (solaris11-chi-2) Zone:z3_zone_res:offline:Command [/usr/sbin/zoneadm -z "z3" detach 2>&1] failed with output [umount: warning: /z3root/root not in mnttab
 umount: /z3root/root not mounted
 ERROR: unable to unmount /z3root/root.
 ERROR: Unable to unmount boot environment.
]
 
 
I am able to bring up the Oracle resource on one of the nodes in an HA zone.

Please see main.cf attached.
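
The detach fails because something still holds /z3root/root mounted. A sketch of the usual triage (standard Solaris commands; the paths come from the log above):

```
# What is mounted under the zone root, and what still holds it?
#   zoneadm -z z3 list -v
#   mount | grep /z3root
#   fuser -c /z3root/root
# Unmount the leftovers, then retry the detach:
#   umount /z3root/root
#   zoneadm -z z3 detach
```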

problems with llt manual installation


 

root@solaris11-chi-1:/etc/VRTSvcs/conf/config# cat /etc/llttab
set-node solaris11-chi-1
set-cluster 7777
link net2 /dev/net/net2:0 - ether - -
link net3 /dev/net/net3:1 - ether - -
root@solaris11-chi-1:/etc/VRTSvcs/conf/config#
 
 
root@solaris11-chi-1:/etc/VRTSvcs/conf/config# cat /etc/llthosts
0 solaris11-chi-2
1 solaris11-chi-1
root@solaris11-chi-1:/etc/VRTSvcs/conf/config#
 
 
root@solaris11-chi-1:/etc/VRTSvcs/conf/config#  /lib/svc/method/llt start
Starting LLT...
LLT lltconfig ERROR V-14-2-15040 node ID is already set, use -o to override
Could not start LLT successfully.
root@solaris11-chi-1:/etc/VRTSvcs/conf/config#
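
The V-14-2-15040 error means the LLT kernel module already holds a node ID from an earlier configuration attempt. A sketch of the usual recovery (lltconfig options per the VRTSllt package; verify against your release):

```
# Unconfigure the stale LLT state, then start again:
#   lltconfig -U
#   /lib/svc/method/llt start
# or force the new configuration over the existing node ID:
#   lltconfig -o -c
```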
 

 

GAB /sbin/gabconfig ERROR V-15-2-25022 unknown error


 

root@solaris11-chi-1:/etc/VRTSvcs/conf/config# more main.cf
include "OracleASMTypes.cf"
include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SybaseTypes.cf"
 
cluster s11cluster (
        UserNames = { admin = dKJgKMjFKrKPjFLfHFgEHl }
        Administrators = { admin }
        )
 
system solaris11-chi-2 (
        )
system solaris11-chi-1 (
        )
 
 
root@solaris11-chi-1:~# gabconfig -c -n2
GAB gabconfig ERROR V-15-2-25022 unknown error
root@solaris11-chi-1:~#
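
For reference, gabconfig failing with an unknown error usually means the GAB driver is not loaded or LLT underneath it is not running yet. A sketch of the usual order of checks (paths per the VRTS packages on Solaris 11):

```
# Is LLT configured and are both nodes visible?
#   lltconfig
#   lltstat -n
# Start GAB, then seed the two-node membership:
#   /lib/svc/method/gab start
#   gabconfig -c -n2
# Verify membership:
#   gabconfig -a
```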
 
 

 

Can we make a cluster filesystem available in both nodes at a time


Hello,

We have a Veritas cluster file system on 2 nodes. The application team has now requested that we mount the cluster file system on both nodes at the same time.

The file system is currently Active/Passive, and the customer request is to make it Active/Active.

Is it possible to make it Active/Active? If yes, please let me know the process.
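
For reference, with Storage Foundation Cluster File System this is done by turning the failover mount into a cluster mount under CVM rather than by changing the existing Mount resource. A hedged sketch using cfsmntadm (disk group, volume, and mount point names are placeholders; exact syntax varies by release):

```
# Add the file system as a cluster mount associated with all nodes:
#   cfsmntadm add <shared_dg> <vol> /<mountpoint> all=
# Mount it cluster-wide:
#   cfsmount /<mountpoint>
```

Note that this requires the disk group to be a shared (CVM) disk group and an SFCFS/SFCFSHA license, not plain VCS.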

 

Regards,

Chaitanya Bezawada. 

Resource became OFFLINE unexpectedly on its own


Hi Experts,

 

I would like to know whether there is any way to check what is causing some resources to go down unexpectedly.

 

 

May 15 14:54:46 EMMDPD07 AgentFramework[7135]: [ID 702911 daemon.notice] VCS ERROR V-16-1-13067 Thread(9) Agent is calling clean for resource(Server1) because the resource became OFFLINE unexpectedly, on its own.
May 15 14:54:47 EMMDPD07 Had[6891]: [ID 702911 daemon.notice] VCS ERROR V-16-1-13067 (EMMDPD07) Agent is calling clean for resource(Server1) because the resource became OFFLINE unexpectedly, on its own.
May 15 14:54:48 EMMDPD07 AgentFramework[7135]: [ID 702911 daemon.notice] VCS ERROR V-16-1-13068 Thread(9) Resource(Server1) - clean completed successfully.
May 15 14:54:48 EMMDPD07 AgentFramework[7135]: [ID 702911 daemon.notice] VCS ERROR V-16-1-13073 Thread(9) Resource(Server1) became OFFLINE unexpectedly on its own. Agent is restarting (attempt number 1 of 1) the resource.
May 15 14:54:48 EMMDPD07 Had[6891]: [ID 702911 daemon.notice] VCS ERROR V-16-1-13073 (EMMDPD07) Resource(Server1) became OFFLINE unexpectedly on its own. Agent is restarting (attempt number 1 of 1) the resource.
 
 
Any comment or suggestion is appreciated :)
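
For reference, the engine log only records that the agent's monitor found the resource offline; the reason usually lives in the per-agent log and the application's own logs. A sketch of where to look (paths per a standard VCS install; the resource name comes from the messages above):

```
# Engine-side history for the resource:
#   grep Server1 /var/VRTSvcs/log/engine_A.log
# Per-agent logs live alongside it, e.g. Application_A.log:
#   ls /var/VRTSvcs/log/
# Temporarily raise the agent's debug level for more detail:
#   hatype -modify <restype> LogDbg DBG_1 DBG_2
```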
 

Does VCS low pri LLT support configure on a public bond NIC with public IP ?


I have a problem adding a low-priority LLT link on a public bonded NIC with a public IP on it.

bond0 is a bond of 2 Ethernet NICs, each connecting to a different router, with a connection between the 2 routers. The 2 IPs are in the same subnet, so there is no routing issue.

After configuring bond0 as a low-pri LLT link, lltstat shows the link DOWN for the other node from each side:

 

# lltstat -nvv|head
LLT node information:
Node State Link Status Address
0 server1 OPEN
eth1 UP E4:1F:13:2D:92:3P
eth3 UP 00:10:18:8C:3U:4D
bond0 DOWN
* 1 server2 OPEN
eth1 UP E4:1F:13:62:39:49
eth3 UP 00:10:18:8C:0B:93
bond0 UP E4:1F:13:62:94:05

[root@cdceap1d ~]# lltstat -nvv|head
LLT node information:
Node State Link Status Address
* 0 server1 OPEN
eth1 UP E4:1F:13:2D:92:3P
eth3 UP 00:10:18:8C:3U:4D
bond0 UP E4:1F:13:2D:0E:6D
1 server2 OPEN
eth1 UP E4:1F:13:62:39:49
eth3 UP 00:10:18:8C:0B:93
bond0 DOWN

 

LLT works at layer 2 (the data link layer); /opt/VRTSllt/lltping cannot get through between the nodes, but arping with a broadcast packet can.

 

So my question is: does low-pri LLT support a public bonded NIC with a public IP?

I found these notes:

LLT supports NIC bonding
You can configure NIC bonds (aggregated interfaces) as private links under LLT.
LLT treats each aggregated interface as a single link. So, you must configure these NICs that form the bond in such a way that the NICs are connected to the same switch or hub.
Note: If the NICs are connected to different switches or hubs, you must establish connection between the switches or hubs.

 

So according to the note, does LLT only support NIC bonding as private links, not with a public IP?
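
On the configuration side: low-priority links are normally placed on public interfaces with the link-lowpri directive, so a bonded public NIC is a plausible candidate. A hedged llttab sketch for Linux (node and cluster values are illustrative; the <mac> placeholders stand for the real addresses):

```
set-node server1
set-cluster 100
link eth1 eth-<mac> - ether - -
link eth3 eth-<mac> - ether - -
link-lowpri bond0 bond0 - ether - -
```

The DOWN status seen only for the peer's bond0 is also consistent with LLT's broadcast traffic not crossing between the two routers, which is worth ruling out first.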

 

Anyone who has experience with this — input would be hugely appreciated.


