Administration Guides

ECA Cluster Operational Procedures


Eyeglass Cluster Maintenance Operations

Note: A restart of the OS does not automatically start the cluster after boot. Follow the steps in this section for the cluster OS shutdown, restart, and boot process.

        Cluster OS shutdown or restart

To correctly shut down the cluster:

  1. Log in as ecaadmin via ssh on the master node (Node 1).
  2. Run ecactl cluster down (wait until all nodes are down).
  3. Shut down the OS on each node:
    1. ssh to each node.
    2. Type sudo -s (enter the admin password).
    3. Type shutdown.
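A minimal example of the full sequence, assuming a 3-node cluster with hypothetical node names eca-node-1 (master) through eca-node-3; replace these with your own node names or IP addresses:

    # On the master node, bring the cluster down first
    ssh ecaadmin@eca-node-1
    ecactl cluster down

    # Then shut down the OS on each node in turn (repeat for every node)
    ssh ecaadmin@eca-node-2
    sudo -s          # enter the admin password
    shutdown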

        Cluster Startup

  1. ssh to the master node (Node 1).
  2. Log in as the ecaadmin user.
  3. Run ecactl cluster up.
  4. Verify the boot messages show that the user and signal tables exist (this step verifies the connection to the analytics database over HDFS on startup).
  5. Verify the cluster is up: run ecactl cluster status (verify containers and tables exist in the output).
  6. Done.
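A minimal sketch of the startup and verification sequence from the master node, assuming the hypothetical node name eca-node-1 for Node 1:

    ssh ecaadmin@eca-node-1
    ecactl cluster up        # watch the boot messages for the user and signal table checks
    ecactl cluster status    # confirm the containers are running and the tables are listed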

ECA Cluster Node IP address Change

To correctly change the cluster node IP addresses:

  1. Log in as ecaadmin via ssh on the master node (Node 1).
  2. Run ecactl cluster down (wait until the cluster is completely down).
  3. Sudo to root:
    1. sudo -s (enter the admin password)
    2. Type yast
    3. Navigate to the networking settings to change the IP address on the interface

  4. In yast, update the IP, DNS, and gateway (router) settings for the interface.
  5. Save and exit yast.
  6. Repeat on all nodes in the cluster.
  7. Once the changes are complete, verify network connectivity with ping and DNS resolution with nslookup.
  8. On the master node (Node 1), edit the configuration file with 'nano /opt/superna/eca/eca-env-common.conf'.
  9. Edit the IP addresses of each node to match the new settings:
  10. export ECA_LOCATION_NODE_1=x.x.x.x
  11. export ECA_LOCATION_NODE_2=y.y.y.y
  12. export ECA_LOCATION_NODE_3=z.z.z.z
  13. Press Ctrl+X to exit and save.
  14. Modify the Isilon NFS export permissions on all clusters managed by the ECA instance. Replace the IPs to include all ECA node IP addresses. The example below shows 3 IPs; check your cluster node count and update the command to match your deployment (see also the consolidated OneFS example after this list).
    • isi nfs exports modify --id 3 -f --add-root-clients="x.x.x.x,y.y.y.y,z.z.z.z" --add-clients="x.x.x.x,y.y.y.y,z.z.z.z"
  15. Update the HDFS access zone with the new IP addresses of the ECA VMs.
    • isi hdfs rack list --zone=eyeglass
      isi hdfs rack modify igls-hdfsrack0 --zone=eyeglass --client-ip-ranges="x.x.x.x,y.y.y.y,z.z.z.z"
  16. Before starting the cluster, clear the ZooKeeper ramdisk and restart Docker on all nodes. Run command: ecactl cluster exec "sudo rm -rf /opt/superna/mnt/zk-ramdisk/* && sudo systemctl restart docker"
  17. From the master node (Node 1), run ecactl cluster up (verify the boot messages look as expected).
  18. Eyeglass /etc/hosts file validation: once the ECA cluster is up, log in to Eyeglass as admin via ssh.
  19. Type cat /etc/hosts.
  20. Verify the new IP addresses assigned to the ECA cluster nodes are present in the hosts file.
  21. If they are not correct, edit the hosts file and correct the IP addresses for each node.
  22. Log in to the Eyeglass GUI and open the Manage Services window. Verify the active ECA nodes are detected as Active and Green.
  23. You should also see inactive ECA nodes listed with the old IP addresses; click the red X next to each to delete these entries from the Managed Services icon.
  24. Done.
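For reference, a consolidated sketch of the OneFS-side commands from steps 14 and 15, assuming a 3-node ECA cluster, NFS export ID 3 and the eyeglass access zone used above; replace the placeholder addresses x.x.x.x, y.y.y.y and z.z.z.z with the new ECA node IPs:

    # Grant the new ECA node IPs root and client access on the existing export
    isi nfs exports modify --id 3 -f --add-root-clients="x.x.x.x,y.y.y.y,z.z.z.z" --add-clients="x.x.x.x,y.y.y.y,z.z.z.z"

    # Review the current HDFS rack configuration, then update the client IP ranges
    isi hdfs rack list --zone=eyeglass
    isi hdfs rack modify igls-hdfsrack0 --zone=eyeglass --client-ip-ranges="x.x.x.x,y.y.y.y,z.z.z.z"

    # Re-run the list command to confirm the new IP ranges were applied
    isi hdfs rack list --zone=eyeglass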


Change ECA Management tool Authentication password

  1. Release 2.5.7 and later protects all management tools on the ECA cluster with a user name and password over an HTTPS login page. This includes the HBase, Kafka and Spark UIs that are accessible from the Managed Services icon in the Eyeglass GUI.
    1. The login to these UIs is ecaadmin and the default password is 3y3gl4ss.
  2. Log in to node 1 over ssh as the ecaadmin user and run the command below.
    1. NOTE: Replace <password> with the new password.
  3. ecactl cluster exec "htpasswd -b /opt/superna/eca/conf/nginx/.htpasswd ecaadmin <password>"
  4. Done. The new password is active immediately on all nodes.
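A worked example, assuming a hypothetical new password NewEcaPass2024 (choose your own value); the optional cat command simply confirms the password hash file was updated on every node:

    ecactl cluster exec "htpasswd -b /opt/superna/eca/conf/nginx/.htpasswd ecaadmin NewEcaPass2024"
    ecactl cluster exec "cat /opt/superna/eca/conf/nginx/.htpasswd"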


Single ECA Node Restart or Host Crash Affecting One or More ECA Nodes

Use this procedure when restarting a single ECA node, which under normal conditions should not be done unless directed by support. The other use case is when a host running an ECA VM is restarted for maintenance and the node leaves the cluster and needs to rejoin.

  1. On the master node, log in via ssh as ecaadmin.
  2. Type the command: ecactl cluster refresh (this command re-integrates the node back into the cluster and checks access to database tables on all nodes).
  3. Verify the output.
  4. Now type: ecactl db shell
  5. Type: status
  6. Verify no dead servers are listed.
  7. If no dead servers are listed, log in to the Eyeglass GUI, check Managed Services and verify all nodes are green.
  8. Cluster node integration procedure completed.
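A minimal sketch of the recovery check run from the master node, assuming the affected node has already rebooted and is reachable on the network:

    ecactl cluster refresh    # re-integrates the node and checks database table access on all nodes
    ecactl db shell           # opens the analytics database (HBase) shell
    status                    # run inside the shell; the output should report no dead servers
    exit                      # leave the db shell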

Eyeglass ECA Cluster Monitoring Operations

Checking ECA database Status: 

  1. ecactl db shell [enter]
  2. status [enter]

Check overall Cluster Status

  1. ecactl cluster status

Check container stats (memory, CPU) on an ECA node

  1. ecactl stats  (auto refreshes)
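These commands are interactive. If a periodic record of cluster health is useful in your environment, a simple wrapper such as the sketch below can be scheduled from cron on the master node; it only uses the ecactl cluster status command shown above, and the log path and schedule are assumptions to adapt as needed:

    #!/bin/bash
    # Hypothetical health-snapshot wrapper: appends a timestamped
    # 'ecactl cluster status' report to a local log file for later review.
    LOG=/home/ecaadmin/eca-status.log
    {
      echo "==== $(date) ===="
      ecactl cluster status
    } >> "$LOG" 2>&1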


Security

Self-Signed Certificate Replacement for ECA Cluster Nginx Proxy

  1. SSH to ECA node 1
  2. Run command: cd /opt/superna/eca/conf/nginx
  3. Run command: mv nginx.crt nginx.crt.bak
  4. Run command: mv nginx.key nginx.key.bak
  5. Run the command below (replace the domain name and the IP in the command with your own values):
    1. openssl req -new -x509 -sha256 -newkey rsa:2048 -nodes -keyout nginx.key -days 365 -out nginx.crt -subj "/CN=st-search-1.ad2.test" -addext "subjectAltName=DNS:st-search-1.ad2.test,IP:172.25.24.224"
  6. Run command: ecactl cluster push-config
  7. Run command: ecactl cluster services restart --container nginx --all
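To confirm the new certificate is in place, a quick inspection such as the sketch below can be used; the openssl x509 options shown are standard, and the file path matches the directory used above:

    # Display the subject and validity dates of the newly generated certificate
    openssl x509 -in /opt/superna/eca/conf/nginx/nginx.crt -noout -subject -dates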


© Superna Inc