Administration Guides

Monthly Index Backup Solution Guide


To protect the index, we recommend a monthly or bi-weekly backup to an NFS export on the PowerScale. This provides a recovery point for a large index stored within the Search & Recover cluster.

BACKUP: Backup Index

The BACKUP command backs up Search & Recover indexes and configurations for a specified index. The BACKUP command takes one copy from each shard of the index. For configurations, it backs up the configSet associated with the collection, along with the collection metadata. Use the following command to back up the igls Search & Recover collection and its associated configurations to PowerScale over NFS: /admin/collections?action=BACKUP&name=iglssearchbackup1&collection=igls&location=/opt/superna/mnt/backup/&async=task-id


  1. Create an NFS export on PowerScale for this backup. Example: create an NFS export with path "/ifs/searchindexbackup", and grant the Search & Recover nodes read and write permission to this export by adding the Search & Recover node IPs to the read/write client list on the export.
    1. ssh to the cluster as root
      1. mkdir -p /ifs/searchindexbackup 
    2. Create the user named "eyeglasshdfs" in the local system provider; no password is required when creating this user. This user will own the files on PowerScale.
    3. Configure ownership of the NFS export path on PowerScale: chown -R eyeglasshdfs:"PowerScale Users" /ifs/searchindexbackup
    4. Change the mode of this directory: chmod -R 777 /ifs/searchindexbackup
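Steps 1.1 through 1.4 above can be consolidated into a short dry-run script. This is a sketch based only on the commands in this guide; it prints the commands rather than running them, so you can review them before executing as root on the PowerScale cluster.

```shell
#!/bin/sh
# Dry-run sketch of the PowerScale-side preparation (steps 1.1-1.4).
# It only prints the commands; review them, then run them as root on
# the PowerScale cluster. Path, user, and group names come from this guide.
EXPORT_PATH=/ifs/searchindexbackup
OWNER='eyeglasshdfs:"PowerScale Users"'

CMDS=$(cat <<EOF
mkdir -p $EXPORT_PATH
chown -R $OWNER $EXPORT_PATH
chmod -R 777 $EXPORT_PATH
EOF
)
printf '%s\n' "$CMDS"
```

Creating the NFS export itself and adding the Search & Recover node IPs to its read/write client list is done separately (GUI or isi CLI), as described in step 1.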
  2. On each of the Search & Recover cluster nodes:
    1. Mount the NFS export to the mount point on each Solr node, replacing <dns name of smartconnect> with your SmartConnect name. Example: mount -t nfs -o nfsvers=3 <dns name of smartconnect>:/ifs/searchindexbackup /opt/superna/mnt/solr-backup
    2. Repeat these steps on each node, from node 2 through the last node (4 or 7, depending on the Search & Recover cluster size).
    3. To ensure the NFS mount persists a reboot:
      1. Complete these steps on nodes 2 - X (X is the last node in the cluster, depending on the size of your Search & Recover cluster)
        vim /etc/fstab
      2. Replace FQDN with the correct value for your cluster. NOTE: the FQDN should be a SmartConnect name for a pool in the System Access Zone SmartConnect IP pool.
      3. FQDN:/ifs/searchindexbackup /opt/superna/mnt/solr-backup nfs rw 0 0
      4. Save the file.
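The per-node fstab entry above can be rendered with a small helper so the same line is used consistently on every node. This is a sketch; the FQDN argument below is a placeholder for your System Access Zone SmartConnect name, and rw is used because the Solr BACKUP command writes its snapshot files to this mount.

```shell
#!/bin/sh
# Sketch: render the /etc/fstab line for a given SmartConnect FQDN.
# The example FQDN is a placeholder -- substitute a SmartConnect name
# from the System Access Zone IP pool.
fstab_entry() {
  printf '%s:/ifs/searchindexbackup /opt/superna/mnt/solr-backup nfs rw 0 0\n' "$1"
}

# Example: print the line, then append it to /etc/fstab on each node.
line=$(fstab_entry "search.example.com")
printf '%s\n' "$line"
```

After appending the line, the mount can be brought up immediately with the mount command from step 2.1 without waiting for a reboot.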
  3. Restart the cluster to make the new mount visible:
    1. SSH to node 1 of the cluster as ecaadmin.
    2. ecactl cluster down (wait for this to finish).
    3. ecactl cluster up
    4. Verify that the NFS-mounted directory is in the mount list of the solr container:
      1. ecactl containers exec solr mount
  4. Execute the backup command
    1. location=/opt/superna/mnt/backup (this is the local path in the VM that is mounted to the PowerScale export).
    2. task-id = 1 (any integer can be used; this value identifies the task and is used to check its status with the REQUESTSTATUS command).
    3. Log in to node 2 using ssh as the ecaadmin user.
    4. Run this command:
      1. curl 'http://node2-IP:8983/solr/admin/collections?action=BACKUP&name=iglssearchbackup1&collection=igls&location=/opt/superna/mnt/backup/&async=1'
      2. Then use this command to monitor progress:
        1. curl 'http://node2-IP:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1'
    5. Once the task has completed, action=REQUESTSTATUS returns the status of the backup (success/failed).
    6. Note: for a large index this backup can take hours.
    7. Once completed, log in via ssh to the PowerScale and verify that the backup directory contains files.
    8. The size of the index backup will be smaller than the index size on the cluster.
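The two curl commands in step 4 can be built from a small URL helper so the host, collection, and task id only need to be set once. This is a sketch; node2-IP, iglssearchbackup1, and the task id 1 are the example values from this guide, and the curl calls are left as comments because they require a live Search & Recover node.

```shell
#!/bin/sh
# Sketch: build the Solr Collections API URLs used in the backup step.
# Substitute node2-IP and the other example values with your own.
collections_url() {
  host="$1"; shift
  printf 'http://%s:8983/solr/admin/collections?%s' "$host" "$*"
}

backup_url=$(collections_url node2-IP \
  'action=BACKUP&name=iglssearchbackup1&collection=igls&location=/opt/superna/mnt/backup/&async=1')
status_url=$(collections_url node2-IP 'action=REQUESTSTATUS&requestid=1')

printf '%s\n%s\n' "$backup_url" "$status_url"
# On node 2, start the backup with:  curl "$backup_url"
# then poll with:                    curl "$status_url"
# until REQUESTSTATUS reports the task as completed or failed.
```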

RESTORE: Restore Index

The RESTORE command will create an index with the name specified in the collection parameter. Use the following command to restore the igls Search & Recover index and associated configurations: /admin/collections?action=RESTORE&name=iglssearchbackup1&collection=igls&location=/opt/superna/mnt/backup/&async=task-id


The target collection must not exist when the API is called, as Search & Recover will create it. To restore with the same collection name, first delete the existing collection with the DELETE command.


  1. Delete the existing index from the Collection screen in the GUI.
  2. Restore collection: 
    1. Log in to node 2 of the cluster and execute this command:
    2. Example: curl 'http://node2-ip:8983/solr/admin/collections?action=RESTORE&name=iglssearchbackup1&collection=igls&location=/opt/superna/mnt/backup/&async=1'
    3. Check the status of the request task for that task-id:
      1. curl 'http://node2-IP:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=task-id'
      2. NOTE: This process can take hours on a large restore
      3. Once the task has completed, action=REQUESTSTATUS returns the status of the restore (success/failed).
    4. Use the GUI to verify that the collection is green (healthy) after the restore.
    5. Done.
    6. Verify ingestion tasks are functioning by creating new files and verifying that you can search for them.
    7. Use the health check process to verify ingestion, and use the stats command to confirm that files are being added to the index successfully. See the Configuration section.
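When scripting the restore, the REQUESTSTATUS polling in step 2.3 needs the task state pulled out of the JSON response. This is a sketch of such a parser; the sample response below is illustrative, following the state/msg shape of Solr Collections API async responses, not captured from a real cluster.

```shell
#!/bin/sh
# Sketch: extract the task state from a REQUESTSTATUS JSON response so a
# script can poll until the restore finishes. The sample response is
# illustrative of the Solr async status format.
request_state() {
  # pull the value of the "state" field out of the JSON body
  printf '%s' "$1" | sed -n 's/.*"state"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

sample='{"responseHeader":{"status":0},"status":{"state":"completed","msg":"found [1] in completed tasks"}}'
state=$(request_state "$sample")
printf '%s\n' "$state"
```

A polling loop could call curl on the REQUESTSTATUS URL, pass the body to request_state, and sleep between iterations until the state is no longer running; as noted above, a large restore can take hours.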

© Superna Inc