
Backup and DR for Audit Database with SyncIQ to a Remote Cluster

This solution backs up the Audit Database to a remote cluster, providing a remote backup and a DR copy at the same time.

SyncIQ to replicate Audit Database to a remote Isilon Cluster

  1. Create a SyncIQ policy to replicate the audit database to a directory under the HDFS root directory, with a replication schedule (a CLI sketch follows the example below). Example:

The access zone base path for the audit database on both the source and target clusters is /ifs/data/igls/analyticsdb/.

HDFS root directory: /ifs/data/igls/analyticsdb/eca

SyncIQ Policy Source Path: /ifs/data/igls/analyticsdb/eca/ecahbase

SyncIQ Policy Target Path on remote cluster: /ifs/data/igls/analyticsdb/eca/ecahbase

Recommended policy schedule: once a day at noon, 7 days a week
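
The policy can be created from the OneFS WebUI or CLI. The line below is only a minimal sketch, assuming OneFS 8.x CLI syntax; the policy name (auditdb-backup), the target host name (remote-cluster.example.com), and the schedule string are placeholders, so verify the exact arguments and schedule format for your OneFS version:

isi sync policies create auditdb-backup sync /ifs/data/igls/analyticsdb/eca/ecahbase remote-cluster.example.com /ifs/data/igls/analyticsdb/eca/ecahbase --schedule "Every day at 12:00 PM"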

  2. When creating a local Hadoop user (eyeglasshdfs) in the System access zone of the remote Isilon cluster, as per the Preparation of Analytics Database Cluster documentation, specify the same UID as the local Isilon cluster's Hadoop user. Example:

isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --password-expires=no --zone=system --uid=eyeglasshdfs_uid_on_local_isilon
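
To find the UID value to supply, the existing Hadoop user can first be looked up on the local (source) Isilon cluster; a sketch, assuming the user was also created in the System zone there:

isi auth users view eyeglasshdfs --zone=system

The UID field in the output is the value to pass to --uid when creating the user on the remote cluster.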

  3. The SyncIQ policy target path on the remote Isilon cluster must have the same ownership and permissions as the source database.

chown -R eyeglasshdfs:'Isilon Users' /ifs/data/igls/analyticsdb/eca/ecahbase

chmod -R 755 /ifs/data/igls/analyticsdb/eca/ecahbase
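
After the first replication completes, the ownership and mode on the target path can be confirmed from the remote cluster shell, for example:

ls -ld /ifs/data/igls/analyticsdb/eca/ecahbase

The listing should show eyeglasshdfs as owner, 'Isilon Users' as group, and drwxr-xr-x (755) permissions.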

Restore the Audit Database with SyncIQ to a remote Data Center

This procedure assumes an ECA cluster will be deployed at the remote location to use the database copy.

The ECA cluster at the remote location must meet the following requirements:

  1. The number of ECA Cluster nodes deployed in the remote location must be the same as the number of nodes in the local ECA Cluster. Example: if the local ECA Cluster was configured with 3 nodes, the remote ECA Cluster must also be configured with 3 nodes.
  2. The remote ECA Cluster ID must be the same as the local ECA Cluster ID. Verify that the ECA_CLUSTER_ID setting in the /opt/superna/eca/eca-env-common.conf file of the remote ECA Cluster has the same ID as the source ECA Cluster. Example:

export ECA_CLUSTER_ID=eca_local_cluster_id
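
To confirm the IDs match, compare the setting on a node of each ECA cluster, for example:

grep ECA_CLUSTER_ID /opt/superna/eca/eca-env-common.conf

The exported value must be identical on the local and remote ECA clusters.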

  3. The ISILON_HDFS_ROOT of the remote ECA Cluster must be configured to point to a directory under the HDFS root directory; this is the same directory to which the SyncIQ policy from the previous section replicates the HBase database. Example: configure /opt/superna/eca/eca-env-common.conf:

export ISILON_HDFS_ROOT='hdfs://hdfs_smartconnect_zone_name:8020/ecahbase'
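
The path portion of this URI is relative to the access zone's HDFS root directory, so /ecahbase here resolves to /ifs/data/igls/analyticsdb/eca/ecahbase, the SyncIQ target path from the previous section. As a sketch (the access zone name below is a placeholder), the HDFS root directory configured on the remote cluster can be checked with:

isi hdfs settings view --zone=access_zone_name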

Procedure:

  1. Use Eyeglass Per SyncIQ Policy failover with DR Assistant to fail over the Audit Database policy. This automates all steps for the SyncIQ policy. Alternatively, use the manual method in steps 2 and 3.
  2. (Manual method) Change the schedule of the audit database replication SyncIQ policy to manual.
  3. (Manual method) Make the SyncIQ path on the remote Isilon cluster writeable using the Allow Writes option on the local target policy on the remote cluster (see Isilon documentation).
  4. Ensure that the replicated audit database folder has the correct ownership and permission settings (eyeglasshdfs user).
    1. Log in to the cluster as root, change directory to the database directory, and type 'ls -la'; this lists file ownership and should show the eyeglasshdfs user.

Audit Re-Configuration Steps to get fully operational:

  1. Update the /ifs/.var export on the cluster(s) that were monitored by the ECA to include the IP addresses of the new ECA nodes.
  2. This procedure assumes NFS automount is enabled, so no manual NFS mount changes are required.
  3. Bring up the remote ECA Cluster:
    1. ssh to the ECA master node (node 1).
    2. Log in as ecaadmin.
    3. Run the command: ecactl cluster up
  4. Verify the NFS mount is successful: log in to nodes 1 through 3 and verify the mount is visible by typing 'mount'.
  5. NOTE: During cluster up, uncommitted transactions are replayed to the database; this can be seen in the HBase Region Server GUI logs (http://x.x.x.x:16030) and can make cluster startup take longer.
  6. Verify that the ECA Cluster is up and the audit database status returns no errors (sample output below; a condensed command sketch follows the sample). Command: ecactl db shell

2017-11-10 01:56:33,254 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017

hbase(main):001:0> status

1 active master, 2 backup masters, 3 servers, 0 dead, 2.6667 average load
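
For reference, a condensed sketch of the commands from the procedure above, run on the remote ECA cluster:

# On the ECA master node (node 1), logged in as ecaadmin:
ecactl cluster up
# On each ECA node (1 through 3), confirm the NFS mount is visible:
mount
# From node 1, open the audit database shell:
ecactl db shell
# At the hbase(main) prompt, check cluster health:
status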

Copyright Superna LLC