
How to Migrate Eyeglass Search Appliance Index from ECA OpenSuse 15.1 to OpenSuse 15.3 OS



When to use this procedure


This procedure moves an index from an old appliance running the OpenSuse 15.1 OS to a new appliance running OpenSuse 15.3.  It requires VMware access to edit the VMs and to attach VMDK disks from the old appliance to the new appliance.


Pre-Migrate steps

  1. Clone the existing VM to protect the index in case a rollback is required, or back up the VM with VMware backup tools.
  2. NOTE:  No snapshots can exist on the VM used for rollback, because the VMDK disks cannot be moved to another VM while snapshots exist.
  3. NOTE:  If no backup has been completed and an issue damages the index disk during migration, the only way to revert is from a backup or clone of the VMDK.  The alternative is to re-index the data on the new appliance instead of backing up or cloning the old appliance.

High Level steps

  1. Deploy the Superna Search appliance
  2. Configure NFS mounts
  3. Create a backup of the old appliance
  4. Power on the new vApp to confirm all nodes are up and reachable, then power it off
  5. Remove (without deleting) Hard Disk 2 from old vApp nodes 1-N
  6. Remove and delete Hard Disk 2 on nodes 1-N of the new vApp, then add the hard disks removed from the old vApp to new vApp nodes 1-N
  7. Power on the new vApp
  8. Copy the backup zip to the new vApp and restore the configuration from the backup

Detailed Steps


  1. Create Search Cluster backup on old appliance:
    1. SSH to ECA node 1 as user: ecaadmin
    2. Type command: ecactl cluster backup
    3. Follow the documented backup procedure to retrieve this backup file (see the section on retrieving the backup file).
  2. Create logs folder under /opt/data/superna on all ECA nodes
    1. cd /opt/data/superna
    2. mkdir logs
    3. chmod 775 logs
    4. Repeat the above steps on all ECA nodes
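The logs-folder creation above can be scripted. This is a minimal sketch: the base path defaults to a local demo directory so it can be tried anywhere (on an ECA node it would be /opt/data/superna), and the commented loop with placeholder IPs shows how to repeat it on every node.

```shell
#!/bin/sh
# Create the logs directory with mode 775, as in the steps above.
# BASE defaults to a local demo path; on an ECA node use /opt/data/superna.
BASE="${BASE:-./superna-demo}"
mkdir -p "$BASE/logs"
chmod 775 "$BASE/logs"
echo "created $BASE/logs"

# To repeat on every ECA node (placeholder IPs - replace with your nodes):
# for ip in 192.168.1.11 192.168.1.12 192.168.1.13; do
#   ssh ecaadmin@"$ip" 'mkdir -p /opt/data/superna/logs && chmod 775 /opt/data/superna/logs'
# done
```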
  3. Bring down old ECA cluster
    1. SSH to ECA node 1 as user: ecaadmin
    2.  Type command: ecactl cluster down
  4. Using the vCenter UI, power off the old vApp
    1. Download the new Eyeglass Search OVF based on OpenSuse 15.3 and deploy the vApp as per the documented install procedure
  5. Configure the new Eyeglass Search vApp with the same configuration as the old Eyeglass Search vApp. Assign the following when deploying the new vApp:
    1. Same ECA cluster name
    2. Same IP Addresses for ECA nodes
  6. Once the deployment of the new vApp has completed, power it on and then:
    1. SSH to this new ECA node 1 as user ecaadmin
    2. Type command: ecactl components configure-nodes
    3. Edit the Search Cluster backup zip file (open it with a zip utility, e.g. 7-Zip) and remove the known_hosts file from each node folder in the backup, under the path /<node-x>/home/ecaadmin/.ssh/
    4. Copy the updated Search Cluster backup zip file to the new ECA node 1 (for example, with WinSCP)
    5. Restore from the backup with the command: ecactl cluster restore --path <path-to-copied-backup-file>
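If a GUI archive tool is not at hand, the known_hosts entries can also be stripped with the command-line Info-ZIP `zip` utility, as an alternative to the 7-Zip approach above. The sketch below builds a tiny stand-in archive that mimics the backup layout (the folder names and file contents are placeholders), then deletes every node's known_hosts entry from the archive in one pass.

```shell
#!/bin/sh
# Build a small archive that mimics the backup layout (placeholder content).
mkdir -p node-1/home/ecaadmin/.ssh node-2/home/ecaadmin/.ssh
echo host-key > node-1/home/ecaadmin/.ssh/known_hosts
echo host-key > node-2/home/ecaadmin/.ssh/known_hosts
echo config  > node-1/settings.conf
zip -qr backup.zip node-1 node-2

# Delete the known_hosts file from every node folder inside the archive.
zip -qd backup.zip '*/home/ecaadmin/.ssh/known_hosts'

# The remaining entries no longer include any known_hosts file.
unzip -l backup.zip
```

On the real backup zip, only the `zip -qd` line is needed, pointed at the downloaded backup file.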
  7. Once the restore has completed, create a local directory on ECA node 2 - last node for mounting the PowerScale Snapshot NFS export (only required if Content Ingestion is used)
    1. SSH to ECA node 2 as user ecaadmin
    2. sudo su -
    3. mkdir -p /opt/superna/mnt/search/<GUID-of-PowerScale-Cluster>/<Cluster-name>
    4. Repeat the above steps on ECA node 3 - last node
    5. Modify the file /etc/fstab on ECA node 2 - last node:
    6. Open the copied fstab file from the old ECA node 2, copy the mount line for the PowerScale Snapshot folder, and insert it into the fstab file on the new ECA node 2 - last node
    7. On each node (node 2 - last node) complete these steps:
    8. ssh ecaadmin@x.x.x.x (IP of each ECA node)
    9. sudo -s (enter the ecaadmin password when prompted)
    10. nano /etc/fstab
    11. Paste the mount line into the file, then press Ctrl+X (answer Y) to save and exit
    12. Test the mount in fstab on the node
    13. NOTE: you should still be the root user from the steps above
    14. Type command: mount -a
    15. If there are no mount errors, this command produces no output
    16. Check the mount; type: mount [enter]
    17. Review the output to make sure the mount is visible
    18. Repeat steps 8-17 on ECA node 3 - last node
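The line copied between fstab files in the steps above is an NFS entry pointing at the PowerScale Snapshot export. A representative entry is sketched below; the host and export path are placeholders and must match the line taken from the old node's fstab, not this example.

```
# /etc/fstab entry (illustrative only - host and export path are placeholders)
<powerscale-host>:/ifs/.snapshot/<snapshot-folder>  /opt/superna/mnt/search/<GUID-of-PowerScale-Cluster>/<Cluster-name>  nfs  defaults  0  0
```

After pasting the real line, `mount -a` (as root) mounts the entry and prints nothing on success, matching the test steps above.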
  8. Upgrade to the latest code
    1. Copy the upgrade file to node 1
    2. chmod 777 <upgrade file>
    3. ./<upgrade file>
  9. Done


VMware Steps to move the index to the new appliance


  1. Edit the settings of the new ECA VM node 1 (Warning: do this on the new ECA vApp VMs, not on the old ECA vApp VMs) and remove Hard Disk 2 with the option “Remove from virtual machine and delete files from disk”.
  2. Edit the settings of the old ECA VM node 1 (this step is done on the old ECA vApp VMs), record the Datastore Disk File location for Hard Disk 2, and then remove the disk from the VM inventory only (Warning: only remove from the VM, do not delete files from disk). Choose: “Remove from virtual machine”.
  3. IMPORTANT STEP: Record the full path of the VMDK disk on the datastore for use in a later step. NOTE: You will need this exact path to attach the disk to the new appliance VM.
  4. Re-add that 2nd disk from old ECA VM 1 to the corresponding new ECA VM 1. NOTE: You must use the VMDK location in the datastore recorded in the step above. Click Add.
  5. Select “Use an existing virtual disk”.
  6. Specify the correct Disk File Path (Warning: do not choose the wrong disk); use the datastore, folder, and VMDK path recorded in the step above.
  7. Accept the default advanced options and click “Next”.
  8. Click “Finish”.
  9. IMPORTANT STEP: Repeat the 2nd-disk VMDK migration from the old appliance to the new appliance VMs for all remaining ECA VMs.
  10. Mandatory Step: Take a VMware-level snapshot of all search nodes before proceeding to the next steps. This is the only way to roll back if any issue blocks the upgrade.
  11. Done
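For clusters with many nodes, the disk move above can also be driven from the command line with VMware's govc CLI instead of the vCenter UI. The sketch below is a dry run: GOVC defaults to `echo govc`, so it only prints the commands it would run. The VM names, device name, and datastore path are placeholders, and the exact `device.remove -keep` / `vm.disk.attach` options are assumptions to verify against your govc version before running for real.

```shell
#!/bin/sh
# Dry-run sketch of the per-node VMDK move using govc (VMware CLI).
# GOVC defaults to `echo govc` so this prints the intended commands only;
# set GOVC=govc (with GOVC_URL etc. configured) to execute them for real.
GOVC="${GOVC:-echo govc}"

OLD_VM="old-search-node-1"      # placeholder VM names - replace with yours
NEW_VM="new-search-node-1"
DISK_DEV="disk-1000-1"          # placeholder device name of Hard Disk 2
DISK_PATH="[datastore1] old-search-node-1/old-search-node-1_1.vmdk"  # recorded path

# 1. Detach Hard Disk 2 from the old VM but KEEP the files on the datastore.
$GOVC device.remove -vm "$OLD_VM" -keep "$DISK_DEV"

# 2. Attach the existing VMDK (the recorded path) to the new VM.
$GOVC vm.disk.attach -vm "$NEW_VM" -disk "$DISK_PATH"
```

Repeat the pair of commands for each remaining node, substituting that node's VM names and recorded VMDK path.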


Power on New appliance

  1. SSH to ECA node 1 as ecaadmin
  2. Ping each IP address in the cluster until every VM responds. NOTE: Do not continue until you can ping each VM in the cluster.
  3. From ECA node 1: ecactl cluster up
  4. Verify that new ECA can be brought up successfully
  5. Verify Search license: searchctl licenses list
  6. Verify registered PowerScale cluster: searchctl PowerScales list
  7. Verify configured folder: searchctl folders list
  8. Verify from the Eyeglass Search UI at https://<eca-node1-ip> that you are able to log in and search the existing data.
  9. Add new data and, once the next incremental ingestion and commit have completed, verify it from the Eyeglass Search UI.
  10. Done
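The ping check in step 2 can be turned into a small wait loop so that `ecactl cluster up` is only attempted once every node answers. This is a sketch: the default node list is just the loopback address so it runs anywhere, the retry cap simply stops it from waiting forever, and the result is written to a small file so later automation could branch on it.

```shell
#!/bin/sh
# Sketch: wait (with a retry cap) until each ECA node answers ping before
# running `ecactl cluster up`. Replace NODES with your ECA node IPs,
# e.g. NODES="192.168.1.11 192.168.1.12 192.168.1.13".
NODES="${NODES:-127.0.0.1}"
MAX_TRIES=5

all_up=yes
for ip in $NODES; do
  tries=0
  while ! ping -c 1 -W 2 "$ip" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$MAX_TRIES" ]; then
      echo "$ip: unreachable after $MAX_TRIES attempts"
      all_up=no
      break
    fi
    sleep 2
  done
  [ "$tries" -lt "$MAX_TRIES" ] && echo "$ip: reachable"
done

# Record the overall result for later automation to branch on.
echo "$all_up" > ./node-check-result
if [ "$all_up" = yes ]; then
  echo "all nodes reachable - proceed with: ecactl cluster up"
else
  echo "do NOT run ecactl cluster up yet"
fi
```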

© Superna LLC