
Eyeglass Golden Copy Installation and Upgrade Guide



Overview

The Golden Copy VM contains all management and GUI functions and can copy data directly as a single VM. Additional Virtual Accelerator Nodes (VANs) can be deployed to scale out the performance of copy jobs.

Requirements

  1. vCenter 6.x, 6.5 and 7.0.1 (Build 17491160)
  2. Supported browsers for the GUI: Chrome on Windows, Edge (Chromium edition only)

VM Specifications for 3 Scaling Configurations 

  1. Small Configuration (lab testing) - 1 x VM with 4x vCPU, 16 GB of RAM, 400 GB hard disk
    1. Disk read and write latency < 20 ms (test with the command iostat -xyz -d 3; see the latency check sketch after this list)
  2. Small Configuration (production use with default VM resources) - 1 x VM with 4x vCPU, 16 GB of RAM, 400 GB hard disk
    1. Limit of 4 folder definitions
    2. > 4 folder definitions requires additional disk space to store file copy history for each folder. Add 110 GB for 10 folders.
    3. NOTE: Multi-VM deployments provide additional disk space on the VM cluster for storing file copy history
    4. Disk read and write latency < 20 ms (test with the command iostat -xyz -d 3)
  3. Vertical Scaling (high performance archiving) - 1 x VM with 12x vCPU, 32 GB of RAM, 600 GB hard disk
    1. Before power on, modify the RAM and CPU to match the settings above
    2. > 4 folder definitions requires additional disk space to store file copy history for each folder. Add 110 GB for 10 folders.
    3. Disk read and write latency < 20 ms (test with the command iostat -xyz -d 3)
    4. Modify the following file to expand the parallel file copies per VM:
      1. nano /opt/superna/eca/eca-env-common.conf
      2. Add a line:
        1. export ARCHIVE_PARALLEL_THREAD_COUNT=400
      3. control+x to save and exit
    5. Change the memory configuration (note: the spacing must be exactly as shown below):
      1. nano /opt/superna/eca/docker-compose.overrides
      2. Add the following content, then control+x to save and exit:

        version: '2.4'
        services:
            indexworker:
              mem_limit: 8GB
              mem_reservation: 8GB
              memswap_limit: 8GB

            archiveworker:
              mem_limit: 8GB
              mem_reservation: 8GB
              memswap_limit: 8GB

            kafka:
              mem_limit: 4GB
              mem_reservation: 4GB
              memswap_limit: 4GB

  4. Scale-out Configuration (high performance and concurrent copy jobs) - 6 x VM with 4x vCPU, 16 GB of RAM, 400 GB hard disk
    1. > 4 folder definitions requires additional disk space to store file copy history for each folder. Add 110 GB for 10 folders.
    2. Disk read and write latency < 20 ms (test with the command iostat -xyz -d 3)
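
The latency targets above can be checked with iostat from the sysstat package. A minimal sketch; the 20 ms threshold is the target from the sizing notes above:

    # Sample extended device stats every 3 seconds (requires the sysstat package)
    # -x extended stats, -y skip the since-boot summary, -z hide idle devices, -d device report only
    iostat -xyz -d 3

    # In the output, the r_await and w_await columns show average read and
    # write latency in ms; both should stay below 20 ms under copy load.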

Cloud Storage Network Requirements

  1. Direct NAT (private IP to public IP) network
  2. Proxy configurations are not currently supported

Firewall Rules and Direction Table

NOTE: These rules apply to traffic into and out of the VM. All ports must be open between the VMs; private VLANs or firewalls between the VMs are not supported.

Operating System: OpenSUSE 15.1. It is the customer's responsibility to patch the operating system and to allow Internet repository access for automatic patching. The OS is not covered by the support agreement.

Port                                     | Direction                                          | Function
ping                                     | Golden Copy node 1 → PowerScale                    | Verify reachability before adding a cluster to Golden Copy
22 (SSH)                                 | Admin PC → Golden Copy VM                          | Management access to the CLI
443 (HTTPS)                              | Admin PC → Golden Copy VM                          | Management GUI
8080 (HTTPS) and 22 (SSH)                | Golden Copy VM → PowerScale                        | REST API access and SSH
NFS: UDP/TCP 111, UDP/TCP 2049, UDP 300  | Golden Copy VM → PowerScale; VAN VMs → PowerScale  | NFS mount in the System zone
9020 (HTTP) and 9021 (HTTPS)             | Golden Copy VM and VAN VMs → Dell ECS              | S3 protocol
443 (HTTPS)                              | Golden Copy VM and VAN VMs → AWS S3                | S3 protocol
443 (HTTPS)                              | Golden Copy VM and VAN VMs → Azure Blob            | Azure Blob storage REST API
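
The paths in the table can be spot-checked from the Golden Copy VM with standard tools. A hedged sketch; all host names below are placeholders, not values from this guide:

    # PowerScale REST API port
    nc -vz cluster.example.com 8080
    # NFS portmapper (TCP/UDP 111)
    rpcinfo -p cluster.example.com
    # Outbound HTTPS to an S3 endpoint
    curl -skI https://s3.example.com | head -n 1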

Firewall Diagram



    Isilon/PowerScale Cluster NFS Mount Preparation Steps (Mandatory)

    1. An IP pool created in the System access zone with at least 3 nodes as members. A DNS SmartConnect name must be assigned to a management IP pool in the System zone for the NFS export used to read content from the snapshots folder, and for a 2nd NFS export used for data recall.
    2. Get the GUID and name of each cluster that will be indexed. Record these values for the steps below.
      1. Login to the cluster OneFS GUI, open the Cluster Management --> General settings menu and record the cluster GUID and cluster name. Example below.
    3. Repeat for each cluster that will be licensed and used as a source cluster to copy data.
    4. Create an NFS export in the System access zone for full content on all clusters that will be used as a source for archiving data. See the example below, where the IP addresses entered are the Golden Copy VM's. The export is created on the /ifs/.snapshot directory with both the root clients list and the clients list. Add the Golden Copy and all Virtual Accelerator Node IP addresses.

    5. Create the recall NFS folder /ifs/goldencopy/recall using the cluster root user over ssh, then create the export (see the sketch after this list).
    6. Done
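
    A minimal sketch of step 5, assuming OneFS 8.x CLI syntax (verify the flags against your OneFS version; the IP addresses are placeholders for the Golden Copy and VAN VMs):

      # As root over ssh on the cluster: create the recall folder
      mkdir -p /ifs/goldencopy/recall
      # Then create the export in the System zone with root and client access
      isi nfs exports create /ifs/goldencopy/recall \
          --zone=System \
          --root-clients="10.0.0.11,10.0.0.21" \
          --clients="10.0.0.11,10.0.0.21"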



    Eyeglass Golden Copy Service Account Preparation Steps (Mandatory)

    The Golden Copy appliance is based on the ECA cluster architecture and has similar steps to complete installation.

    Before you begin, collect the information below and verify all prerequisites are completed:

    1. Permissions for Service Account: PowerScale REST API access with file traverse permissions, so the Golden Copy VM can ingest files and directories. See the minimum permissions guide for a full list of permissions required for the eyeglassSR service account used by all ECA cluster products. Guide here.
     

    Golden Copy OVA Deployment and Cluster VM Configuration (Mandatory)

    1. Download the OVA following instructions here
    2. Deploy with vCenter
      NOTE:
      vCenter 6.5 and 6.7 use the FLASH or FLEX interface. The HTML5 interface is not supported.
      1. Select the OVA file.
      2. Set the node IP addresses, gateway IP, DNS and NTP IPs
      3. Set the ECA cluster name (no special characters, all lower case)
      4. NOTE: If using the vertical scaling configuration, edit the VM configuration to 12 vCPU and 32 GB of RAM before power on.
      5. Power on the OVA
      6. SSH to the node 1 IP address
      7. Login with user ecaadmin and the default password 3y3gl4ss
    3. Start up the cluster:
      1. ecactl cluster up
    4. Get the appliance id and make a record of it - it will be required to retrieve the license:
      1. ecactl version
    5. Deployment done
    6. NEXT Steps: Golden Copy Cluster Logical Configuration
    7. Configuration steps to add licenses, clusters and archive folders are covered in the Quick Start Steps of the Golden Copy Admin guide.
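
    A condensed first-boot sketch using only the commands above (the IP address is a placeholder for node 1):

      ssh ecaadmin@10.0.0.11    # node 1 IP set during OVA deployment
      # login with the default password 3y3gl4ss
      ecactl cluster up         # start the cluster
      ecactl version            # record the appliance id for license retrieval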

    How to Deploy Virtual Accelerator Nodes (VAN's) (Optional)

    This node type is optional and enables distributed scale-out copy performance. The Golden Copy VM can copy files without VAN VMs deployed.

    NOTE: VAN deployment requires 6 VMs

    1. Download the OVA following instructions here
    2. Deploy with vCenter
      NOTE:
      vCenter 6.5 and 6.7 use the FLASH or FLEX interface. The HTML5 interface is not supported.
      1. Select the OVA file.
      2. Set the node IP addresses, gateway IP, DNS and NTP IPs
    3. Set the ECA cluster name (no special characters, all lower case)
    4. Repeat 6 times to deploy all 6 VMs
    5. Power on the VMs
    6. SSH to the Golden Copy VM node 1 (the IP address of the first VM deployed)
    7. Login with user ecaadmin and the default password 3y3gl4ss
    8. Add each VM IP from node 1 using the command below:
      1. ecactl cluster add-node <ip_of_new_node>    (note: all 6 VMs must be booted and pingable)
    9. Upgrade each VM to the same release
      1. Download the upgrade file to each VAN VM and make it executable with chmod 777 /home/ecaadmin/upgradefilename.run
      2. Run the upgrade:
        1. /home/ecaadmin/upgradefilename.run
      3. Complete the upgrade on all VMs
    10. ecactl cluster up  (from node 1)
    11. Verify the boot process executes on all nodes in the cluster
    12. Copy jobs can now use the additional VANs to copy files.
    13. Manage configuration from node 1 only.
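
    A minimal sketch of steps 8-10 run from Golden Copy node 1 (the six IP addresses are placeholders for the VAN VMs deployed above):

      # Register each VAN; every VM must be booted and pingable
      for ip in 10.0.0.21 10.0.0.22 10.0.0.23 10.0.0.24 10.0.0.25 10.0.0.26; do
        ecactl cluster add-node "$ip"
      done
      # Start services across the expanded cluster
      ecactl cluster up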


    Golden Copy and VAN VM node NFS Mount Configuration (Mandatory)

    1. Golden Copy uses PowerScale snapshots to copy content. Follow the steps below to add the NFS exports created in the steps above to each of the VMs. 2 NFS mounts are required: one for copying data and one for recalling data.
    2. NOTE: An advanced configuration that enables SMB3 mount option with encryption is documented here.   Consult with support before using this configuration.
    3. You will need to complete these steps on all nodes. Collect:
      1. Cluster GUID and cluster name for each licensed cluster
      2. Cluster name as shown in the top right corner after login to the OneFS GUI
    4. Change to the root user
      1. ssh to each VM as ecaadmin
      2. sudo -s
      3. enter the ecaadmin password 3y3gl4ss
    5. Create the local mount directories (repeat for each Isilon cluster)
      1. mkdir -p /opt/superna/mnt/search/GUID/clusternamehere/    (replace GUID and clusternamehere with the correct values)
      2. mkdir -p /opt/superna/mnt/recall/GUID/clusternamehere
      3. (Only if you have Virtual Accelerator Nodes, otherwise skip) Use this command to run against all Golden Copy nodes; you will be prompted for the ecaadmin password on each node.
        1. NOTE: Must be run from the Golden Copy VM, and all VAN VMs must be added to the eca-env-common.conf file.
        2. NOTE: example only.
        3. ecactl cluster exec "sudo mkdir -p /opt/superna/mnt/search/00505699937a5e1f5b5d8b2342c2c3fe9fd7/clustername"
        4. ecactl cluster exec "sudo mkdir -p /opt/superna/mnt/recall/00505699937a5e1f5b5d8b2342c2c3fe9fd7/clustername"
    6. Configure automatic NFS mounts after reboot
      1. Prerequisites
        1. This will add a mount for content indexing to fstab on all nodes
        2. Build the mount command using the cluster GUID and cluster name, replacing the placeholder sections with the correct values for your cluster. NOTE: This is only an example
        3. You will need a SmartConnect name to mount the snapshot folder on the cluster. The SmartConnect name should be on a System zone IP pool
        4. Replace <CLUSTER_NFS_FQDN> with a DNS SmartConnect name
        5. Replace <GUID> with the cluster GUID
        6. Replace <NAME> with the cluster name
      2. On each VM in the Golden Copy cluster:
        1. ssh to the node as ecaadmin
        2. sudo -s
        3. enter ecaadmin password
        4. echo '<CLUSTER_NFS_FQDN>:/ifs/.snapshot /opt/superna/mnt/search/<GUID>/<NAME> nfs defaults,nfsvers=3 0 0'| sudo tee -a /etc/fstab
        5. echo '<CLUSTER_NFS_FQDN>:/ifs/goldencopy/recall /opt/superna/mnt/recall/<GUID>/<NAME> nfs defaults,nfsvers=3 0 0'| sudo tee -a /etc/fstab
        6. mount -a
        7. Run mount to verify the mounts are present
        8. exit
        9. Login to the next node via ssh
      3. Repeat the steps on each VM
    7. done
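
    Putting the steps together, the fstab entries for a hypothetical cluster named clustername, using the GUID from the example above and a SmartConnect name of gc-nfs.example.com, would look like this (all three values are placeholders):

      gc-nfs.example.com:/ifs/.snapshot /opt/superna/mnt/search/00505699937a5e1f5b5d8b2342c2c3fe9fd7/clustername nfs defaults,nfsvers=3 0 0
      gc-nfs.example.com:/ifs/goldencopy/recall /opt/superna/mnt/recall/00505699937a5e1f5b5d8b2342c2c3fe9fd7/clustername nfs defaults,nfsvers=3 0 0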

    How to Configure Multi Golden Copy VM Parallel Copy (Mandatory)

    1. Vertically scaled VM or multi Golden Copy VM deployments
      1. The default deployment limits concurrent copies to 1 folder with a full or incremental job running. This must be changed for multi-VM deployments to allow multiple folders to execute concurrent jobs (full or incremental).
    2. Single VM limitations:
      1. Single VM deployments are only supported with single folder concurrent job execution.
    3. Steps to enable job concurrency
      1. Login to VM node 1 as ecaadmin
      2. nano /opt/superna/eca/eca-env-common.conf
      3. Copy and paste the settings shown below. This enables full or incremental jobs on up to 30 folders defined within Golden Copy across all clusters added to Golden Copy.
        1. Consult product supported limits of jobs in the admin guide.


    # for blocking parallel jobs of any kind, true (enabled) by default
    export ARCHIVE_BLOCK_PARALLEL_JOBS=false
    # number of parallel full archive jobs allowed, works if `ARCHIVE_BLOCK_PARALLEL_JOBS` is disabled
    export ARCHIVE_FULL_PARALLEL_JOBS_ALLOWED=30
    # number of parallel incremental archive jobs allowed, works if `ARCHIVE_BLOCK_PARALLEL_JOBS` is disabled
    export ARCHIVE_INCREMENTAL_PARALLEL_JOBS_ALLOWED=30
    # total number of parallel jobs allowed, defaults to 1 FULL and 1 INCREMENTAL
    export ARCHIVE_TOTAL_JOBS_ALLOWED=60
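
    For the new limits to take effect on all nodes, the configuration presumably needs to be pushed as in the upgrade section below (an assumption; consult support if unsure):

      # Push the updated configuration to all cluster nodes
      echo y | ecactl cluster push-config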



    How To Upgrade Golden Copy Cluster

    Offline Cluster No Internet Method

    1. Login to the support site https://support.superna.net and download the offline Golden Copy upgrade file
    2. Mandatory step:
      1. Take a vmware level snapshot of the appliance VM's before starting the upgrade for roll back.
    3. Read Me First
      1. On 1.1.16 and later releases, use the configuration backup command before upgrading to protect folder configurations and job history:
      2. searchctl settings config export
    4. Shutdown the cluster
      1. Login to node 1 as ecaadmin over ssh
      2. run the command "ecactl cluster down"  
      3. wait for the cluster to shutdown
      4. Modify the install file copied to cluster node 1
        1. Assuming the file was copied to the default location /home/ecaadmin
        2. cd /home/ecaadmin
        3. chmod 777 <name of install file here>
      5. Run the installer
        1. ./<name of install file here>
        2. When prompted, enter the ecaadmin password
        3. Wait for all nodes to be upgraded to the new version
        4. NOTE: Restore custom memory configurations
          1. Delete any memory-related lines from the files below (the mem_limit, mem_reservation and memswap_limit lines added earlier). Use control+x to save and exit after deleting the 3 lines.
          2. nano /opt/superna/eca/templates/docker-compose.gc.yml
          3. nano /opt/superna/eca/templates/docker-compose.gc_multi_node.yml
          4. nano /opt/superna/eca/docker-compose.yml (remove the 3 memory lines in the archiveworker section)
          5. For the changes to take effect:
            1. echo y | ecactl cluster push-config
      6. Start the cluster
        1. ecactl cluster up
        2. wait until all nodes are started
      7. done.
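
    A condensed sketch of the offline upgrade flow above, run on node 1 (the installer file name is a placeholder):

      ecactl cluster down                   # stop the cluster
      cd /home/ecaadmin
      chmod 777 goldencopy-upgrade.run      # make the installer executable
      ./goldencopy-upgrade.run              # enter the ecaadmin password when prompted
      echo y | ecactl cluster push-config   # only if memory overrides were edited
      ecactl cluster up                     # restart and verify all nodes come up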


    © Superna Inc