Wednesday, April 23, 2014

Configure 2-node Veritas Cluster Server on Solaris 10 (no I/O fencing)

# /opt/VRTS/install/installsfha601 -configure

    Veritas Storage Foundation and High Availability 6.0.1 Configure Program

Copyright (c) 2012 Symantec Corporation. All rights reserved.  Symantec, the
Symantec Logo are trademarks or registered trademarks of Symantec Corporation or
its affiliates in the U.S. and other countries. Other names may be trademarks of
their respective owners.

The Licensed Software and Documentation are deemed to be "commercial computer
software" and "commercial computer software documentation" as defined in FAR
Sections 12.212 and DFARS Section 227.7202.

Logs are being written to /var/tmp/installsfha601-201404231512rwa while
installsfha601 is in progress.

Enter the Solaris x64 system names separated by spaces: [q,?] sol2 sol3

    Veritas Storage Foundation and High Availability 6.0.1 Configure Program
                                    sol2 sol3

Logs are being written to /var/tmp/installsfha601-201404231512rwa while
installsfha601 is in progress

    Verifying systems: 100%

    Estimated time remaining: (mm:ss) 0:00                            5 of 5

    Checking system communication ..................................... Done
    Checking release compatibility .................................... Done
    Checking installed product ........................................ Done
    Checking platform version ......................................... Done
    Performing product prechecks ...................................... Done

System verification checks completed successfully

I/O Fencing

It needs to be determined at this time if you plan to configure I/O Fencing in
enabled or disabled mode, as well as help in determining the number of network
interconnects (NICS) required on your systems. If you configure I/O Fencing in
enabled mode, only a single NIC is required, though at least two are
recommended.

A split brain can occur if servers within the cluster become unable to
communicate for any number of reasons. If I/O Fencing is not enabled, you run
the risk of data corruption should a split brain occur. Therefore, to avoid data
corruption due to split brain in CFS environments, I/O Fencing has to be
enabled.

If you do not enable I/O Fencing, you do so at your own risk

See the Administrator's Guide for more information on I/O Fencing

Do you want to configure I/O Fencing in enabled mode? [y,n,q,?] (y)

To configure VCS, answer the set of questions on the next screen.

When [b] is presented after a question, 'b' may be entered to go back to the
first question of the configuration set.

When [?] is presented after a question, '?' may be entered for help or
additional information about the question.

Following each set of questions, the information you have entered will be
presented for confirmation.  To repeat the set of questions and correct any
previous errors, enter 'n' at the confirmation prompt.

No configuration changes are made to the systems until all configuration
questions are completed and confirmed.

Press [Enter] to continue:

To configure VCS for SFHA the following information is required:

        A unique cluster name
        One or more NICs per system used for heartbeat links
        A unique cluster ID number between 0-65535

        One or more heartbeat links are configured as private links
        You can configure one heartbeat link as a low-priority link

All systems are being configured to create one cluster.

Enter the unique cluster name: [q,?] sol


     1)  Configure heartbeat links using LLT over Ethernet
     2)  Configure heartbeat links using LLT over UDP
     3)  Automatically detect configuration for LLT over Ethernet
     b)  Back to previous menu

How would you like to configure heartbeat links? [1-3,b,q,?] (1)


    Discovering NICs on sol2 .................... Discovered e1000g0 e1000g1

Enter the NIC for the first private heartbeat link on sol2: [b,q,?] (e1000g1)
Would you like to configure a second private heartbeat link? [y,n,q,b,?] (n)

Do you want to configure an additional low-priority heartbeat link? [y,n,q,b,?]
(n) y

Enter the NIC for the low-priority heartbeat link on sol2: [b,q,?] (e1000g0)

Are you using the same NICs for private heartbeat links on all systems?
[y,n,q,b,?] (y)

    Checking media speed for e1000g1 on sol2 ..................... 1000 Mbps
    Checking media speed for e1000g1 on sol3 ..................... 1000 Mbps

Enter a unique cluster ID number between 0-65535: [b,q,?] (27375)

The cluster cannot be configured if the cluster ID 27375 is in use by another
cluster. Installer can perform a check to determine if the cluster ID is
duplicate. The check will take less than a minute to complete.

Would you like to check if the cluster ID is in use by another cluster? [y,n,q]
(y)


    Checking cluster ID ............................................... Done

Duplicated cluster ID detection passed. The cluster ID 27375 can be used for the
cluster.

Press [Enter] to continue:


Cluster information verification:

        Cluster Name:      sol
        Cluster ID Number: 27375
        Private Heartbeat NICs for sol2:
                link1=e1000g1
        Low-Priority Heartbeat NIC for sol2:
                link-lowpri1=e1000g0
        Private Heartbeat NICs for sol3:
                link1=e1000g1
        Low-Priority Heartbeat NIC for sol3:
                link-lowpri1=e1000g0

Is this information correct? [y,n,q,?] (y)


The following data is required to configure the Virtual IP of the Cluster:

        A public NIC used by each system in the cluster
        A Virtual IP address and netmask

Do you want to configure the Virtual IP? [y,n,q,?] (n) y


Active NIC devices discovered on sol2: e1000g0

Enter the NIC for Virtual IP of the Cluster to use on sol2: [b,q,?] (e1000g0)

Is e1000g0 to be the public NIC used by all systems? [y,n,q,b,?] (y)

Enter the Virtual IP address for the Cluster: [b,q,?] 192.168.1.189
Enter the NetMask for IP 192.168.1.189: [b,q,?] (255.255.255.0)

Cluster Virtual IP verification:

        NIC: e1000g0
        IP: 192.168.1.189
        NetMask: 255.255.255.0

Is this information correct? [y,n,q] (y)


Veritas Cluster Server can be configured in secure mode

Running VCS in Secure Mode guarantees that all inter-system communication is
encrypted, and users are verified with security credentials.

When running VCS in Secure Mode, NIS and system usernames and passwords are used
to verify identity. VCS usernames and passwords are no longer utilized when a
cluster is running in Secure Mode.

Would you like to configure the VCS cluster in secure mode? [y,n,q,?] (n)

The following information is required to add VCS users:

        A user name
        A password for the user
        User privileges (Administrator, Operator, or Guest)

Do you wish to accept the default cluster credentials of 'admin/password'?
[y,n,q] (y)


Do you want to add another user to the cluster? [y,n,q] (n)

VCS User verification:

        User: admin     Privilege: Administrators

        Passwords are not displayed

Is this information correct? [y,n,q] (y)

The following information is required to configure SMTP notification:

        The domain-based hostname of the SMTP server
        The email address of each SMTP recipient
        A minimum severity level of messages to send to each recipient

Do you want to configure SMTP notification? [y,n,q,?] (n)

The following information is required to configure SNMP notification:

        System names of SNMP consoles to receive VCS trap messages
        SNMP trap daemon port numbers for each console
        A minimum severity level of messages to send to each console

Do you want to configure SNMP notification? [y,n,q,?] (n)

All SFHA processes that are currently running must be stopped

Do you want to stop SFHA processes now? [y,n,q,?] (y)

Logs are being written to /var/tmp/installsfha601-201404231512rwa while
installsfha601 is in progress

    Starting SFHA: 100%

    Estimated time remaining: (mm:ss) 0:00                          18 of 18

    Starting vxesd .................................................... Done
    Starting vxrelocd ................................................. Done
    Starting vxconfigbackupd .......................................... Done
    Starting vxportal ................................................. Done
    Starting fdd ...................................................... Done
    Starting llt ...................................................... Done
    Starting gab ...................................................... Done
    Starting amf ...................................................... Done
    Starting had ...................................................... Done
    Starting CmdServer ................................................ Done
    Starting vxdbd .................................................... Done
    Starting odm ...................................................... Done
    Performing SFHA poststart tasks ................................... Done

Veritas Storage Foundation and High Availability Startup completed successfully



    Veritas Storage Foundation and High Availability 6.0.1 Configure Program
                                    sol2 sol3

Fencing configuration
     1)  Configure Coordination Point client based fencing
     2)  Configure disk based fencing

Select the fencing mechanism to be configured in this Application Cluster:
[1-2,q] 2

This fencing configuration option requires a restart of VCS. Installer will stop
VCS at a later stage in this run. Do you want to continue? [y,n,q,b,?] y

Do you have SCSI3 PR enabled disks? [y,n,q,b] (y) n

Since you don't have SCSI3 PR enabled disks, you cannot configure disk based
fencing but you can use Coordination Point client based fencing

Do you want to retry fencing configuration? [y,n,q,b] (n)

The updates to VRTSaslapm package are released via the Symantec SORT web page:
https://sort.symantec.com/asl. To make sure you have the latest version of
VRTSaslapm (for up to date ASLs and APMs), download and install the latest
package from the SORT web page.

installsfha601 log files, summary file, and response file are saved at:

        /opt/VRTS/install/logs/installsfha601-201404231512rwa

Would you like to view the summary file? [y,n,q] (n) n

# /opt/VRTSvcs/bin/hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  sol2                 RUNNING              0
A  sol3                 RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State        

B  ClusterService  sol2                 Y          N               ONLINE       
B  ClusterService  sol3                 Y          N               OFFLINE      

# cat /etc/llthosts
0 sol2
1 sol3
# cat /etc/llttab
set-node sol2
set-cluster 27375
link e1000g1 /dev/e1000g:1 - ether - -
link-lowpri e1000g0 /dev/e1000g:0 - ether - -

# cat /etc/gabtab
/sbin/gabconfig -c -n2
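
With LLT and GAB running, a quick sanity check (not captured in the session
above, but standard VCS commands) confirms that both nodes see each other over
the heartbeat links:

# lltstat -nvv | head          (LLT link and peer node status)
# gabconfig -a                 (GAB port membership; port a = GAB, port h = had)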


# cat /etc/VRTSvcs/conf/config/main.cf
include "OracleASMTypes.cf"
include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SybaseTypes.cf"

cluster sol (
        UserNames = { admin = anoGniNkoJooMwoInl }
        ClusterAddress = "192.168.1.189"
        Administrators = { admin }
        )

system sol2 (
        )

system sol3 (
        )

group ClusterService (
        SystemList = { sol2 = 0, sol3 = 1 }
        AutoStartList = { sol2, sol3 }
        OnlineRetryLimit = 3
        OnlineRetryInterval = 120
        )

        IP webip (
                Device = e1000g0
                Address = "192.168.1.189"
                NetMask = "255.255.255.0"
                )

        NIC csgnic (
                Device = e1000g0
                )

        webip requires csgnic


        // resource dependency tree
        //
        //      group ClusterService
        //      {
        //      IP webip
        //          {
        //          NIC csgnic
        //          }
        //      }

# ping 192.168.1.189 (virtual IP)
192.168.1.189 is alive

# netstat -anp
Net to Media Table: IPv4
Device   IP Address               Mask      Flags      Phys Addr
------ -------------------- --------------- -------- ---------------
e1000g0 192.168.1.190        255.255.255.255 o        08:00:27:66:02:a4
e1000g0 192.168.1.102        255.255.255.255          7c:e9:d3:86:3e:12
e1000g0 192.168.1.104        255.255.255.255 o        7c:e9:d3:f7:ac:9b
e1000g0 192.168.1.189        255.255.255.255 SPLA     08:00:27:9a:15:5a
e1000g0 192.168.1.191        255.255.255.255 SPLA     08:00:27:9a:15:5a
e1000g0 224.0.0.0            240.0.0.0       SM       01:00:5e:00:00:00
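
As a final check, the ClusterService group can be switched to the other node to
verify failover of the virtual IP. This was not part of the captured session,
but the commands would be:

# hagrp -switch ClusterService -to sol3
# hastatus -sum                (webip should now be ONLINE on sol3)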




Install Veritas Storage Foundation and High Availability (SFHA) on Solaris 10

Veritas Storage Foundation and High Availability (SFHA) bundles Storage Foundation
(Veritas Volume Manager and Veritas File System) with Veritas Cluster Server (VCS)

# uname -a
SunOS sol2 5.10 Generic_147148-26 i86pc i386 i86pc

# cd dvd2-sol_x64/sol10_x64
# ./installer

    Storage Foundation and High Availability Solutions 6.0.1 Install Program

Symantec Product                                   Version Installed    Licensed
================================================================================
Symantec Licensing Utilities (VRTSvlic) are not installed due to which products
and licenses are not discovered.
Use the menu below to continue.


Task Menu:

    P) Perform a Pre-Installation Check       I) Install a Product
    C) Configure an Installed Product         G) Upgrade a Product
    O) Perform a Post-Installation Check      U) Uninstall a Product
    L) License a Product                      S) Start a Product
    D) View Product Descriptions              X) Stop a Product
    R) View Product Requirements              ?) Help

Enter a Task: [P,I,C,G,O,U,L,S,D,X,R,?] I

    Storage Foundation and High Availability Solutions 6.0.1 Install Program

     1)  Veritas Dynamic Multi-Pathing (DMP)
     2)  Veritas Cluster Server (VCS)
     3)  Veritas Storage Foundation (SF)
     4)  Veritas Storage Foundation and High Availability (SFHA)
     5)  Veritas Storage Foundation Cluster File System HA (SFCFSHA)
     6)  Symantec VirtualStore (SVS)
     7)  Veritas Storage Foundation for Oracle RAC (SF Oracle RAC)
     b)  Back to previous menu

Select a product to install: [1-7,b,q] 4


Do you agree with the terms of the End User License Agreement as specified in
the storage_foundation_high_availability/EULA/en/EULA_SFHA_Ux_6.0.1.pdf file
present on media? [y,n,q,?] y


     1)  Install minimal required packages - 650 MB required
     2)  Install recommended packages - 1116 MB required
     3)  Install all packages - 1153 MB required
     4)  Display packages to be installed for each option

Select the packages to be installed on all systems? [1-4,q,?] (2) 3


Enter the Solaris x64 system names separated by spaces: [q,?] sol2 sol3

Logs are being written to /var/tmp/installer-201404231324onc while installer is
in progress

    Verifying systems: 100%

    Estimated time remaining: (mm:ss) 0:00                            8 of 8

    Checking system communication ..................................... Done
    Checking release compatibility .................................... Done
    Checking installed product ........................................ Done
    Checking prerequisite patches and packages ........................ Done
    Checking platform version ......................................... Done
    Checking file system free space ................................... Done
    Checking product licensing ........................................ Done
    Performing product prechecks ...................................... Done

System verification checks completed

The following warnings were discovered on the systems:

CPI WARNING V-9-40-4970 To avoid a potential reboot after installation, you
should modify the /etc/system file on sol2 with the appropriate values, and
reboot prior to package installation.

Appropriate /etc/system file entries are shown below:
        set lwp_default_stksize=0x6000
        set rpcmod:svc_default_stksize=0x6000

CPI WARNING V-9-40-4970 To avoid a potential reboot after installation, you
should modify the /etc/system file on sol3 with the appropriate values, and
reboot prior to package installation.

Appropriate /etc/system file entries are shown below:
        set lwp_default_stksize=0x6000
        set rpcmod:svc_default_stksize=0x6000

Do you want to continue? [y,n,q] (y)

The following Veritas Storage Foundation and High Availability packages will be
installed on all systems:

Package           Package Description

VRTSperl          Veritas Perl 5.14.2 Redistribution
VRTSvlic          Veritas Licensing
VRTSspt           Veritas Software Support Tools by Symantec
VRTSvxvm          Veritas Volume Manager Binaries
VRTSaslapm        Veritas Volume Manager - ASL/APM
VRTSob            Veritas Enterprise Administrator Service by Symantec
VRTSvxfs          Veritas File System
VRTSfssdk         Veritas File System Software Developer Kit
VRTSllt           Veritas Low Latency Transport
VRTSgab           Veritas Group Membership and Atomic Broadcast
VRTSvxfen         Veritas I/O Fencing by Symantec
VRTSamf           Veritas Asynchronous Monitoring Framework by Symantec
VRTSvcs           Veritas Cluster Server
VRTScps           Veritas Cluster Server - Coordinated Point Server
VRTSvcsag         Veritas Cluster Server Bundled Agents by Symantec
VRTSvcsea         Veritas Cluster Server Enterprise Agents by Symantec

Press [Enter] to continue:

VRTSdbed          Veritas Storage Foundation Databases
VRTSodm           Veritas Oracle Disk Manager
VRTSsfmh          Veritas Storage Foundation Managed Host by Symantec
VRTSvbs           Veritas Virtual Business Service
VRTSsfcpi601      Veritas Storage Foundation Installer

Press [Enter] to continue:


Logs are being written to /var/tmp/installer-201404231324onc while installer is
in progress

    Installing SFHA: 100%

    Estimated time remaining: (mm:ss) 0:00                          23 of 23

    Installing VRTSgab package ........................................ Done
    Installing VRTSvxfen package ...................................... Done
    Installing VRTSamf package ........................................ Done
    Installing VRTSvcs package ........................................ Done
    Installing VRTScps package ........................................ Done
    Installing VRTSvcsag package ...................................... Done
    Installing VRTSvcsea package ...................................... Done
    Installing VRTSdbed package ....................................... Done
    Installing VRTSodm package ........................................ Done
    Installing VRTSsfmh package ....................................... Done
    Installing VRTSvbs package ........................................ Done
    Installing VRTSsfcpi601 package ................................... Done
    Performing SFHA postinstall tasks ................................. Done

Veritas Storage Foundation and High Availability Install completed successfully


To comply with the terms of Symantec's End User License Agreement, you have 60
days to either:

 * Enter a valid license key matching the functionality in use on the systems
 * Enable keyless licensing and manage the systems with a Management Server. For
more details visit http://go.symantec.com/sfhakeyless. The product is fully
functional during these 60 days.

     1)  Enter a valid license key
     2)  Enable keyless licensing and complete system licensing later

How would you like to license the systems? [1-2,q] (2)

     1)  SF Standard HA
     2)  SF Enterprise HA
     b)  Back to previous menu

Select product mode to license: [1-2,b,q,?] (1)

Would you like to enable replication? [y,n,q] (n)

Registering SFHA license
SFHA vxkeyless key (SFHASTD) successfully registered on sol2
SFHA vxkeyless key (SFHASTD) successfully registered on sol3

The updates to VRTSaslapm package are released via the Symantec SORT web page:
https://sort.symantec.com/asl. To make sure you have the latest version of
VRTSaslapm (for up to date ASLs and APMs), download and install the latest
package from the SORT web page.

Veritas Storage Foundation and High Availability cannot be started without
configuration.

Run the '/opt/VRTS/install/installsfha601 -configure' command when you are ready
to configure Veritas Storage Foundation and High Availability.

Checking online updates for Veritas Storage Foundation and High Availability
6.0.1

        Attempted to connect to https://sort.symantec.com to check for product
updates, but connection failed.
        Please visit https://sort.symantec.com to check for available product
updates and information.


The following packages require reboot while installing them on sol2:
        VRTSvxfs

The following packages require reboot while installing them on sol3:
        VRTSvxfs

It is strongly recommended to reboot the following systems:
        sol2
        sol3

Execute '/usr/sbin/shutdown -y -i6 -g0' to properly restart your systems

installer log files, summary file, and response file are saved at:

        /opt/VRTS/install/logs/installer-201404231324onc

Would you like to view the summary file? [y,n,q] (n)




Tuesday, April 22, 2014

How to restart sshd in Solaris 10

# svcadm restart ssh

NAME
     svcadm - manipulate service instances

SYNOPSIS
     /usr/sbin/svcadm [-v] enable [-rst] {FMRI | pattern}...

     /usr/sbin/svcadm [-v] disable [-st] {FMRI | pattern}...

     /usr/sbin/svcadm [-v] restart {FMRI | pattern}...

     /usr/sbin/svcadm [-v] refresh {FMRI | pattern}...

     /usr/sbin/svcadm [-v] clear {FMRI | pattern}...

     /usr/sbin/svcadm [-v] mark [-It] instance_state
         {FMRI | pattern}...

     /usr/sbin/svcadm [-v] milestone [-d] milestone_FMRI
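
To confirm that sshd came back up after the restart, check the SMF service
state; it should report online:

# svcs ssh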

Install Veritas Cluster Server on Solaris 10

# uname -a
SunOS sol1 5.10 Generic_147148-26 i86pc i386 i86pc

# cd dvd2-sol_x64/sol10_x64/cluster_server

# ls
EULA          copyright     installvcs    tools         uninstallvcs

# ./installvcs

Enter y to agree to the End User License Agreement (EULA).

Do you agree with the terms of the End User License Agreement as specified in
the cluster_server/EULA/en/EULA_VCS_Ux_6.0.1.pdf file present on media?
[y,n,q,?] y

                  Veritas Cluster Server 6.0.1 Install Program

     1)  Install minimal required packages - 374 MB required
     2)  Install recommended packages - 628 MB required
     3)  Install all packages - 663 MB required
     4)  Display packages to be installed for each option

Select the packages to be installed on all systems? [1-4,q,?] (2) 3

Enter the Solaris x64 system names separated by spaces: [q,?] sol1

                  Veritas Cluster Server 6.0.1 Install Program
                                      sol1

Logs are being written to /var/tmp/installvcs-201404221225Cml while installvcs
is in progress

    Verifying systems: 87%                                           _______

    Estimated time remaining: (mm:ss) 0:05                            7 of 8

    Checking system communication ..................................... Done
    Checking release compatibility .................................... Done
    Checking installed product ........................................ Done
    Checking prerequisite patches and packages ........................ Done
    Checking platform version ......................................... Done
    Checking file system free space ................................... Done
    Checking product licensing ........................................ Done
    Performing product prechecks /


The following Veritas Cluster Server packages will be installed on all systems:

Package           Package Description

VRTSllt           Veritas Low Latency Transport
VRTSgab           Veritas Group Membership and Atomic Broadcast
VRTSvxfen         Veritas I/O Fencing by Symantec
VRTSamf           Veritas Asynchronous Monitoring Framework by Symantec
VRTSvcs           Veritas Cluster Server
VRTScps           Veritas Cluster Server - Coordinated Point Server
VRTSvcsag         Veritas Cluster Server Bundled Agents by Symantec
VRTSvcsea         Veritas Cluster Server Enterprise Agents by Symantec
VRTSvbs           Veritas Virtual Business Service

Press [Enter] to continue:


Logs are being written to /var/tmp/installvcs-201404221225Cml while installvcs
is in progress

    Installing VCS: 100%

    Estimated time remaining: (mm:ss) 0:00                          11 of 11

    Performing VCS preinstall tasks ................................... Done
    Installing VRTSllt package ........................................ Done
    Installing VRTSgab package ........................................ Done
    Installing VRTSvxfen package ...................................... Done
    Installing VRTSamf package ........................................ Done
    Installing VRTSvcs package ........................................ Done
    Installing VRTScps package ........................................ Done
    Installing VRTSvcsag package ...................................... Done
    Installing VRTSvcsea package ...................................... Done
    Installing VRTSvbs package ........................................ Done
    Performing VCS postinstall tasks .................................. Done

Veritas Cluster Server Install completed successfully

To comply with the terms of Symantec's End User License Agreement, you have 60
days to either:

 * Enter a valid license key matching the functionality in use on the systems
 * Enable keyless licensing and manage the systems with a Management Server. For
more details visit http://go.symantec.com/sfhakeyless. The product is fully
functional during these 60 days.

     1)  Enter a valid license key
     2)  Enable keyless licensing and complete system licensing later

How would you like to license the systems? [1-2,q] (2) 2

Would you like to enable the Global Cluster Option? [y,n,q] (n) n

Registering VCS license
VCS vxkeyless key (VCS) successfully registered on sol1

Would you like to configure VCS on sol1? [y,n,q] (n) n

Veritas Cluster Server cannot be started without configuration.

Run the '/opt/VRTS/install/installvcs601 -configure' command when you are ready
to configure Veritas Cluster Server.

Checking online updates for Veritas Cluster Server 6.0.1

        Attempted to connect to https://sort.symantec.com to check for product
updates, but connection failed.
        Please visit https://sort.symantec.com to check for available product
updates and information.


installvcs log files, summary file, and response file are saved at:

        /opt/VRTS/install/logs/installvcs-201404221225Cml

Would you like to view the summary file? [y,n,q] (n)

# pkginfo | grep -i vrts
system      VRTSamf                          Veritas Asynchronous Monitoring Framework Module
system      VRTSaslapm                       Array Support Libraries and Array Policy Modules for Veritas Volume Manager
optional    VRTScps                          Veritas Co-ordination Point Server by Symantec
application VRTSdbed                         Veritas Storage Foundation for Databases by Symantec
system      VRTSgab                          Veritas Group Membership and Atomic Broadcast by Symantec
system      VRTSllt                          Veritas Low Latency Transport by Symantec
application VRTSob                           Veritas Enterprise Administrator Service by Symantec
system      VRTSodm                          Veritas Oracle Disk Manager by Symantec
optional    VRTSperl                         Perl 5.14.2 for Veritas
optional    VRTSsfcpi601                     Veritas Storage Foundation Installer
application VRTSsfmh                         Veritas Operations Manager Managed Host by Symantec
application VRTSspt                          Veritas Software Support Tools by Symantec
optional    VRTSvbs                          Virtual Business Services by Symantec
system      VRTSvcs                          Veritas Cluster Server by Symantec
system      VRTSvcsag                        Veritas Cluster Server Bundled Agents by Symantec
system      VRTSvcsea                        Veritas High Availability Enterprise Agents by Symantec
application VRTSvlic                         Symantec License Utilities
system      VRTSvxfen                        Veritas I/O Fencing by Symantec
system      VRTSvxfs                         Veritas File System by Symantec
system      VRTSvxvm                         Binaries for VERITAS Volume Manager by Symantec




Veritas Cluster Server Notes

Definition of a Cluster

A clustered environment includes multiple components configured such that if one
component fails, its role can be taken over by another component to minimize or
avoid service interruption.

The term cluster, simply defined, refers to multiple independent systems or
domains connected into a management framework for increased availability.
Clusters have the following components:
• Up to 32 systems—sometimes referred to as nodes or servers
Each system runs its own operating system.
• A cluster interconnect, which allows for cluster communications
• A public network, connecting each system in the cluster to a LAN for client
access
• Shared storage (optional), accessible by each system in the cluster that needs to
run the application

Definition of Service Group

A service group is a virtual container that enables VCS to manage an application
service as a unit. The service group contains all the hardware and software
components required to run the service, which enables VCS to coordinate failover
of the application service resources in the event of failure or at the administrator’s
request.
A service group is defined by these attributes:
• The cluster-wide unique name of the group
• The list of the resources in the service group, usually determined by which
resources are needed to run a specific application service
• The dependency relationships between the resources
• The list of cluster systems on which the group is allowed to run
• The list of cluster systems on which you want the group to start automatically
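
As an illustration, the same attributes can be set from the command line. The
group name appsg below is hypothetical, not part of the configuration shown
elsewhere in this post:

# haconf -makerw
# hagrp -add appsg
# hagrp -modify appsg SystemList sol2 0 sol3 1
# hagrp -modify appsg AutoStartList sol2 sol3
# haconf -dump -makero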

Service Group Types
Service groups can be one of three types:
• Failover
This service group runs on one system at a time in the cluster. Most application
services, such as database and NFS servers, use this type of group.
• Parallel
This service group runs simultaneously on more than one system in the cluster.
This type of service group requires an application that can be started on more
than one system at a time without threat of data corruption.
• Hybrid (4.x)
A hybrid service group is a combination of a failover service group and a
parallel service group used in VCS 4.x replicated data clusters (RDCs), which
are based on VERITAS Volume Replicator. This service group behaves as a
failover group within a defined set of systems, and a parallel service group
within a different set of systems. RDC configurations are described in the
VERITAS Disaster Recovery Using VVR and Global Cluster Option course.

Definition of a Resource
Resources are VCS objects that correspond to hardware or software components,
such as the application, the networking components, and the storage components.
VCS controls resources through these actions:
• Bringing a resource online (starting)
• Taking a resource offline (stopping)
• Monitoring a resource (probing)
Resource Categories
• Persistent
– None
VCS can only monitor persistent resources—they cannot be brought online
or taken offline. The most common example of a persistent resource is a
network interface card (NIC), because it must be present but cannot be
stopped. FileNone and ElifNone are other examples.
– On-only
VCS brings the resource online if required, but does not stop it if the
associated service group is taken offline. NFS daemons are examples of
on-only resources. FileOnOnly is another on-only example.
• Nonpersistent, also known as on-off
Most resources fall into this category, meaning that VCS brings them online
and takes them offline as required. Examples are Mount, IP, and Process.
FileOnOff is an example of a test version of this resource.
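
For illustration, an on-off resource can be added to a group from the command
line. The resource name appip, the group appsg, and the address are all
hypothetical examples:

# haconf -makerw
# hares -add appip IP appsg
# hares -modify appip Device e1000g0
# hares -modify appip Address "192.168.1.200"
# hares -modify appip NetMask "255.255.255.0"
# hares -modify appip Enabled 1
# haconf -dump -makero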

Resource Dependencies
Resources depend on other resources because of application or operating system
requirements. Dependencies are defined to configure VCS for these requirements.
Dependency Rules
These rules apply to resource dependencies:
• A parent resource depends on a child resource. For example, a Mount
resource (parent) depends on a Volume resource (child). This dependency
reflects the operating system requirement that a file system cannot be
mounted until the underlying volume is available.
• Dependencies are homogenous. Resources can only depend on other
resources.
• No cyclical dependencies are allowed. There must be a clearly defined
starting point.
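
Dependencies are created by linking a parent resource to a child resource. With
hypothetical resource names appmnt (a Mount resource) and appvol (a Volume
resource), the link and the resulting dependency can be shown with:

# hares -link appmnt appvol    (parent first, then child)
# hares -dep appmnt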

Agents: How VCS Controls Resources
Agents are processes that control resources. Each resource type has a
corresponding agent that manages all resources of that resource type. Each cluster
system runs only one agent process for each active resource type, no matter how
many individual resources of that type are in use.
Agents control resources using a defined set of actions, also called entry points.
The four entry points common to most agents are:
• Online: Resource startup
• Offline: Resource shutdown
• Monitor: Probing the resource to retrieve status
• Clean: Killing the resource or cleaning up as necessary when a resource fails to
be taken offline gracefully
The difference between offline and clean is that offline is an orderly termination
and clean is a forced termination. In UNIX, this can be thought of as the difference
between exiting an application and sending the kill -9 command to the
process.
Each resource type needs a different way to be controlled. To accomplish this, each
agent has a set of predefined entry points that specify how to perform each of the
four actions. For example, the startup entry point of the Mount agent mounts a
block device on a directory, whereas the startup entry point of the IP agent uses the
ifconfig command to set the IP address on a unique IP alias on the network
interface.
VCS provides both predefined agents and the ability to create custom agents.
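
The entry points are normally exercised indirectly through the ha commands,
which ask HAD (and in turn the agent) to act on a resource. Using the
hypothetical appip resource from above:

# hares -online appip -sys sol2     (agent runs its online entry point)
# hares -offline appip -sys sol2    (agent runs its offline entry point)
# hares -probe appip -sys sol2      (agent runs its monitor entry point)
# hares -clear appip                (clear a fault after clean has run)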

Cluster Communication
VCS requires a cluster communication channel between systems in a cluster to
serve as the cluster interconnect. This communication channel is also sometimes
referred to as the private network because it is often implemented using a
dedicated Ethernet network.
VERITAS recommends that you use a minimum of two dedicated communication
channels with separate infrastructures—for example, multiple NICs and separate
network hubs—to implement a highly available cluster interconnect. Although
recommended, this configuration is not required.
The cluster interconnect has two primary purposes:
• Determine cluster membership: Membership in a cluster is determined by
systems sending and receiving heartbeats (signals) on the cluster interconnect.
This enables VCS to determine which systems are active members of the
cluster and which systems are joining or leaving the cluster.
In order to take corrective action on node failure, surviving members must
agree when a node has departed. This membership needs to be accurate and
coordinated among active members—nodes can be rebooted, powered off,
faulted, and added to the cluster at any time.
• Maintain a distributed configuration: Cluster configuration and status
information for every resource and service group in the cluster is distributed
dynamically to all systems in the cluster.
Cluster communication is handled by the Group Membership Services/Atomic
Broadcast (GAB) mechanism and the Low Latency Transport (LLT) protocol

Low-Latency Transport
VERITAS uses a high-performance, low-latency protocol for cluster
communications. LLT is designed for the high-bandwidth and low-latency needs
of not only VERITAS Cluster Server, but also VERITAS Cluster File System, in
addition to Oracle Cache Fusion traffic in Oracle RAC configurations. LLT runs
directly on top of the Data Link Provider Interface (DLPI) layer over Ethernet and
has several major functions:
• Sending and receiving heartbeats over network links
• Monitoring and transporting network traffic over multiple network links to
every active system
• Balancing cluster communication load over multiple links
• Maintaining the state of communication
• Providing a nonroutable transport mechanism for cluster communications

Group Membership Services/Atomic Broadcast (GAB)
GAB provides the following:
• Group Membership Services: GAB maintains the overall cluster
membership by way of its Group Membership Services function. Cluster
membership is determined by tracking the heartbeat messages sent and
received by LLT on all systems in the cluster over the cluster interconnect.
Heartbeats are the mechanism VCS uses to determine whether a system is an
active member of the cluster, joining the cluster, or leaving the cluster. If a
system stops sending heartbeats, GAB determines that the system has departed
the cluster.
• Atomic Broadcast: Cluster configuration and status information are
distributed dynamically to all systems in the cluster using GAB’s Atomic
Broadcast feature. Atomic Broadcast ensures all active systems receive all
messages for every resource and service group in the cluster.

The Fencing Driver
The fencing driver prevents multiple systems from accessing the same Volume
Manager-controlled shared storage devices in the event that the cluster
interconnect is severed. In the example of a two-node cluster displayed in the
diagram, if the cluster interconnect fails, each system stops receiving heartbeats
from the other system.
GAB on each system determines that the other system has failed and passes the
cluster membership change to the fencing module.
The fencing modules on both systems contend for control of the disks according to
an internal algorithm. The losing system is forced to panic and reboot. The
winning system is now the only member of the cluster, and it fences off the shared
data disks so that only systems that are still part of the cluster membership (only
one system in this example) can access the shared storage.
The winning system takes corrective action as specified within the cluster
configuration, such as bringing service groups online that were previously running
on the losing system.
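
When fencing is configured (it is not in the cluster built earlier in this
post), its state can be inspected with:

# vxfenadm -d                  (fencing mode and current membership)
# gabconfig -a                 (port b indicates the fencing driver)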

The High Availability Daemon
The VCS engine, also referred to as the high availability daemon (had), is the
primary VCS process running on each cluster system.
HAD tracks all changes in cluster configuration and resource status by
communicating with GAB. HAD manages all application services (by way of
agents) whether the cluster has one or many systems.
Building on the knowledge that the agents manage individual resources, you can
think of HAD as the manager of the agents. HAD uses the agents to monitor the
status of all resources on all nodes.
This modularity between had and the agents allows for efficiency of roles:
• HAD does not need to know how to start up Oracle or any other applications
that can come under VCS control.
• Similarly, the agents do not need to make cluster-wide decisions.
This modularity allows a new application to come under VCS control simply by
adding a new agent—no changes to the VCS engine are required.
On each active cluster system, HAD updates all the other cluster systems of
changes to the configuration or status.
In order to ensure that the had daemon is highly available, a companion daemon,
hashadow, monitors had and if had fails, hashadow attempts to restart it.
Likewise, had restarts hashadow if hashadow stops.
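
Both daemons should be visible on every running cluster node, for example:

# ps -ef | egrep 'had|hashadow' | grep -v grep
# hasys -state                 (HAD state on each system)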

Maintaining the Cluster Configuration
HAD maintains configuration and state information for all cluster resources in
memory on each cluster system. Cluster state refers to tracking the status of all
resources and service groups in the cluster. When any change to the cluster
configuration occurs, such as the addition of a resource to a service group, HAD
on the initiating system sends a message to HAD on each member of the cluster by
way of GAB atomic broadcast, to ensure that each system has an identical view of
the cluster.
Atomic means that all systems receive updates, or all systems are rolled back to the
previous state, much like a database atomic commit.
When HAD is not running on any cluster system, there is no configuration in
memory, so the in-memory configuration is created from the main.cf file on disk.
When you start VCS on the first cluster system, HAD builds the configuration in
memory on that system from the main.cf file. Changes to a running (in-memory)
configuration are saved back to main.cf on disk when certain operations occur.
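
The usual pattern for making changes and writing them back to disk, plus a
syntax check of the on-disk configuration, looks like this:

# haconf -makerw               (open the configuration read-write)
# ... hagrp/hares changes ...
# haconf -dump -makero         (write main.cf to disk and close it)
# hacf -verify /etc/VRTSvcs/conf/config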

Networking
VERITAS Cluster Server requires a minimum of two heartbeat channels for the
cluster interconnect, one of which must be an Ethernet network connection. While
it is possible to use a single network and a disk heartbeat, the best practice
configuration is two or more network links.
Loss of the cluster interconnect results in downtime and, in environments
without fencing, can result in a split-brain condition.
For a highly available configuration, each system in the cluster must have a
minimum of two physically independent Ethernet connections for the cluster
interconnect:
• Two-system clusters can use crossover cables.
• Clusters with three or more systems require hubs or switches.
• You can use layer 2 switches; however, this is not a requirement.

Shared Storage
VCS is designed primarily as a shared data high availability product; however, you
can configure a cluster that has no shared storage.
For shared storage clusters, consider these requirements and recommendations:
• One HBA minimum for nonshared disks, such as system (boot) disks
To eliminate single points of failure, it is recommended to use two HBAs to
connect to the internal disks and to mirror the system disk.
• One HBA minimum for shared disks
– To eliminate single points of failure, it is recommended to have two
HBAs connected to the shared disks and to use dynamic multipathing
software, such as VERITAS Volume Manager DMP.
– Use multiple single-port HBAs or SCSI controllers rather than
multiport interfaces to avoid single points of failure.
• Shared storage on a SAN must reside in the same zone as all of the nodes in the
cluster.
• Data residing on shared storage should be mirrored or protected by a
hardware-based RAID mechanism.
• Use redundant storage and paths.
• Include all cluster-controlled data in your backup planning and
implementation. Periodically test restoration of critical data to ensure that the
data can be restored.
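
On a running SFHA node, shared storage visibility and disk group ownership can
be checked from each system, for example:

# vxdisk -o alldgs list        (disks and the disk groups they belong to)
# vxdisk list <disk>           (detailed state of a single disk)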








Disable abort sequence (STOP-A) on Solaris systems

Consider disabling the abort sequence on Solaris systems. When a Solaris system in a VCS cluster is halted with the abort sequence (STOP-A), it stops producing VCS heartbeats. To disable the abort sequence on Solaris systems, add the following line to the /etc/default/kbd file (create the file if it does not exist):

KEYBOARD_ABORT=disable

After the abort sequence is disabled, reboot the system.
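
A minimal way to make the change, assuming the file does not already contain a
KEYBOARD_ABORT entry:

# echo "KEYBOARD_ABORT=disable" >> /etc/default/kbd
# /usr/sbin/shutdown -y -i6 -g0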

Saturday, April 19, 2014

Management of iSCSI initiators: Connect to an iSCSI target on Solaris 10

Add the target to the list of discovery addresses.
The IP address is that of the server hosting the iSCSI target:

iscsiadm add discovery-address IP_ADDRESS

This command won't produce any output.

# iscsiadm add discovery-address 192.168.1.192

To see whether the server exposes any iSCSI devices to others, run:
# iscsiadm list discovery-address -v

The output:
Discovery Address: 192.168.1.192:3260
        Target name: iqn.2006-01.com.openfiler:tsn.24def7fd4a0a
                Target address:   192.168.1.192:3260, 1


To use the iSCSI target:

# iscsiadm add static-config NAME_OF_ISCSI_TARGET,IP_ADDRESS_OF_ISCSI_TARGET

Example:
# iscsiadm add static-config iqn.2006-01.com.openfiler:tsn.24def7fd4a0a,192.168.1.192

We also have to check whether the static discovery method is enabled. Check this with:
# iscsiadm list discovery

Our output:
Discovery:
        Static: disabled
        Send Targets: disabled
        iSNS: disabled

While static discovery is disabled, we won't be able to see the iSCSI drive.
To use it, enable the static discovery method:
# iscsiadm modify discovery --static enable

and the output:
Discovery:
        Static: enabled
        Send Targets: disabled
        iSNS: disabled

Now we can see the new iSCSI drive. Run format to check that the new drive shows up:

# format
:
:
       3. c2t1d0 <OPNFILE-VIRTUAL-DISK   -0    cyl 1021 alt 2 hd 64 sec 32>
          /iscsi/disk@0000iqn.2006-01.com.openfiler%3Atsn.24def7fd4a0a0001,0
:
:
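
One possible way to put the new disk to use (purely as an example, not how the
pools shown later in this blog were created) is to build a ZFS pool on it:

# zpool create iscsipool c2t1d0
# zpool status iscsipool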




Install Openfiler and create iSCSI target

Connect to https://IP:446
Log in as openfiler
Password: password

System -> Network Access Configuration
Create a name for the client IP (the host that will connect to the iSCSI target)




Veritas Volume Manager VxVM not started after reboot Solaris 10

The presence of /etc/vx/reconfig.d/state.d/install-db will prevent Volume Manager from starting up. This is a flag file and is only checked by the vxvm-recover, vxvm-startup1, vxvm-startup2, and vxvm-sysboot scripts. On Solaris 10 these scripts can be found in /lib/svc/method.

To correct this problem, remove the following file:

rm /etc/vx/reconfig.d/state.d/install-db

and then reboot the system. VxVM should start now.
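
After the reboot, the VxVM configuration daemon should be up; a quick check:

# vxdctl mode                  (should report: mode: enabled)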


Tuesday, April 15, 2014

Use ZFS snapshots and roll back a file system from a snapshot on Solaris 10

# zfs snapshot -r labpool/zman@snap1
# zfs list -r -t all labpool
NAME                 USED  AVAIL  REFER  MOUNTPOINT
labpool             2.82M  1.11G  46.4K  /labpool
labpool/zman        2.71M  1.11G  2.71M  /labpool/zman
labpool/zman@snap1      0      -  2.71M  -
# ls -al /labpool/zman/man1/zip.1
-r--r--r--   1 root     bin        85963 Feb 29  2012 /labpool/zman/man1/zip.1

Now we delete this file

# rm /labpool/zman/man1/zip.1
# ls -al /labpool/zman/man1/zip.1
/labpool/zman/man1/zip.1: No such file or directory

Now we roll back

# zfs rollback labpool/zman@snap1
# ls -al /labpool/zman/man1/zip.1
-r--r--r--   1 root     bin        85963 Feb 29  2012 /labpool/zman/man1/zip.1

Now we create a clone

# zfs clone labpool/zman@snap1 labpool/zmanclone
# zfs list -r -t all labpool
NAME                 USED  AVAIL  REFER  MOUNTPOINT
labpool             2.87M  1.11G  46.4K  /labpool
labpool/zman        2.71M  1.11G  2.71M  /labpool/zman
labpool/zman@snap1  1.50K      -  2.71M  -
labpool/zmanclone   1.50K  1.11G  2.71M  /labpool/zmanclone
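
A clone remains dependent on the snapshot it was created from, so the snapshot
cannot be destroyed while the clone exists. When the clone is no longer needed,
or should take over as the origin, for example:

# zfs destroy labpool/zmanclone        (remove the clone)
# zfs promote labpool/zmanclone        (or reverse the dependency so labpool/zman depends on the clone)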