Ransomware Defender for AWS Admin Guide

Overview

Ransomware Defender for AWS has unique administration requirements that are separate from the on-premise deployment.

How to Configure Real-time Security Triggers

Overview

The Defender for AWS product merges Easy Auditor features into the data protection solution.  This enables the following data protection features:

  1. Data Loss Prevention - monitors S3 buckets for a high rate of reads, indicating data is being read from the bucket at a high rate.  This feature calculates the bucket capacity and measures the % of data read over y minutes.
  2. Mass Delete - monitors for a high rate of object deletes on a bucket, indicating a suspicious IO pattern on a bucket that should not have a high rate of deletes.
  3. Custom Security Triggers - allows powerful field-based rules with both AND and OR logic that can combine fields with equals, contains, less than or greater than comparisons to build powerful real-time monitoring.  Examples in this guide:
    1. Untrusted network access
    2. Untrusted user access
    3. Object delete on read-only data
    4. The custom real-time triggers allow advanced AND/OR combinations and listing of multiple buckets to protect and monitor your data.  A common use case is to monitor buckets used by application servers and alert when any IO outside the application server subnets is processed.  This can also be done by monitoring IAM users with expected IO and flagging or alerting when IO other than the application's appears in the bucket(s).
  4. Cyber Recovery Manager Support


How to configure Data Loss Prevention Trigger

  1. Open the Ransomware Defender icon
  2. Click Active Auditor
  3. Click the Data Loss Prevention Configure button
  4. Click New Trigger
  5. Click Directory and select a region and S3 bucket, and optionally add a prefix
  6. Enter the % of the data that will trip the detector and the number of minutes over which the % of data is measured (see the CLI example after these steps to check the bucket's current size)
  7. Click Save
  8. Repeat to protect more buckets
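When sizing the trigger it can help to know the bucket's current capacity.  A quick check from the AWS CLI (a sketch; assumes the CLI is configured with read access and the bucket name is a placeholder):

  # report total object count and total size for the bucket
  aws s3 ls s3://my-protected-bucket --recursive --summarize | tail -2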

How to configure Mass Delete Trigger

  1. Open the Ransomware Defender icon
  2. Click Active Auditor
  3. Click the Mass Delete Configure button
  4. Click New Trigger
  5. Click Directory and select a region and S3 bucket, and optionally add a prefix
  6. Select the number of objects that must be deleted over x minutes to trip the detector (see the example after these steps to check the bucket's current object count)
  7. Click Save
  8. Repeat to protect other buckets
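When choosing the object delete threshold it can help to know roughly how many objects the bucket holds.  A quick check from the AWS CLI (a sketch; the bucket name is a placeholder):

  # report the total object count for the bucket
  aws s3 ls s3://my-protected-bucket --recursive --summarize | grep "Total Objects"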

How to configure Untrusted network access Trigger

  1. Open the Ransomware Defender icon
  2. Click Active Auditor
  3. Click the Custom Real-time policy Configure button
  4. Click New Response
  5. Provide a name for the trigger.  NOTE: emails and syslog messages will contain this name and can be used to set up additional triggers in SIEM or SOAR products
  6. Click View/Edit Audit Criteria
  7. Click Add Rule
  8. Fill in the rule by selecting a region/bucket, then add another rule for source IP not in xxxx and specify the network from which you expect IO to enter the bucket (see the example rule after these steps)
  9. Click Save, then Save again to save the trigger
  10. This trigger will now alert if any IO enters the bucket from an untrusted network
  11. NOTE: You can use OR logic to specify a list of buckets.
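As an illustration only (the exact field names depend on the rule editor and are assumptions here), an untrusted network rule typically combines a bucket match with a negated source network, for example:  bucket equals my-app-bucket AND source IP not in 10.20.0.0/16, with additional bucket clauses joined by OR to cover more buckets.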

How to configure Untrusted user access trigger

  1. Open the Ransomware Defender icon
  2. Click Active Auditor
  3. Click the Custom Real-time policy Configure button
  4. Click New Response
  5. Provide a name for the trigger.  NOTE: emails and syslog messages will contain this name and can be used to set up additional triggers in SIEM or SOAR products
  6. Click View/Edit Audit Criteria
  7. Click Add Rule and complete the rule (see the example after these steps)
  8. Enter the IAM user name that should be using the bucket.
  9. This trigger will alert when another user does any IO in the bucket.
  10. Repeat to protect other buckets.
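As an illustration only (field names are assumptions that depend on the rule editor), the rule typically combines the bucket with a negated user match, for example:  bucket equals my-app-bucket AND IAM user not equals app-service-user.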


How to configure object delete trigger

  1. Open the Ransomware Defender icon
  2. Click Active Auditor
  3. Click the Custom Real-time policy Configure button
  4. Click New Response
  5. Provide a name for the trigger.  NOTE: emails and syslog messages will contain this name and can be used to set up additional triggers in SIEM or SOAR products
  6. Click View/Edit Audit Criteria
  7. Click Add Rule and complete the rule (see the example after these steps)
  8. This rule will trip when a delete occurs in the bucket and will send an alert.
  9. Done.
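As an illustration only (field names are assumptions that depend on the rule editor), the rule typically combines the bucket with the delete operation, for example:  bucket equals my-readonly-bucket AND operation equals DeleteObject.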


How to configure Security Guard Feature

  1. Requirements:
    1. An IAM account
    2. An S3 bucket for Security Guard in the region where you have S3 buckets to be protected
  2. Login to the S3 landing page
    1. Create an S3 bucket in the region where other S3 buckets exist for data protection
    2. The S3 bucket will be assigned to the IAM user in the next step; the bucket does not require ACL security and does not need to be public.
    3. Recommended bucket name:  ransomware-defender-security-guard
  3. Login to the IAM landing page
    1. Create a new user with access and secret keys to be the service account used for the Security Guard self test feature.
    2. The user does not need to be in any special groups but must be assigned to the Security Guard S3 bucket as owner so it can create objects.
    3. Retain the access and secret keys after creating the IAM user.  (See the CLI sketch after these steps for an optional way to create the bucket and user.)
  4. Login to the Eyeglass VM
    1. Open the Ransomware Defender icon
    2. Click the Security Guard tab
    3. Click Enable Task, select the Network Element AWS, and set the interval to 1D (daily)
    4. Scroll down to the bottom of the UI to the AWS Setup section
      1. Requirements:
        1. Security Guard service account IAM user name
        2. Access and secret keys for the Security Guard IAM user
        3. The region where the S3 bucket is located
      2. Click the Submit button when done and the inputs will be validated
      3. If successful, click the Run Now button.
      4. Open the Jobs icon and monitor the job steps from the Running Jobs tab.
    5. Done
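The bucket and service account from steps 2 and 3 can also be created with the AWS CLI.  This is a sketch only: the bucket name, user name, region and policy are placeholders, and it grants access with an inline IAM policy rather than bucket ownership, so adjust it to match your security model.

  # create the Security Guard bucket in the protected region
  aws s3api create-bucket --bucket ransomware-defender-security-guard --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
  # create the service account and its access keys (retain the key output for the Eyeglass AWS Setup section)
  aws iam create-user --user-name security-guard-svc
  aws iam create-access-key --user-name security-guard-svc
  # allow the service account to create, read, list and delete objects in the bucket
  aws iam put-user-policy --user-name security-guard-svc --policy-name security-guard-s3 \
    --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:PutObject","s3:GetObject","s3:DeleteObject","s3:ListBucket"],"Resource":["arn:aws:s3:::ransomware-defender-security-guard","arn:aws:s3:::ransomware-defender-security-guard/*"]}]}'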


How to manually unlock a user account in IAM

  1. If you need to manually unlock a user account that was locked out by Ransomware Defender, follow these steps.
  2. Open the IAM landing page
  3. Click the Users option on the left menu
  4. Select the user and click on the Security Credentials tab
  5. You will see one or more access keys, and the status will show Inactive when locked out.
  6. Click the Make Active link to unlock the user and regain access to S3 buckets
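The same unlock can be done from the AWS CLI (assumes the CLI is configured with IAM permissions; the user name and key ID are placeholders):

  # list the user's access keys and their status
  aws iam list-access-keys --user-name <locked-user>
  # re-activate the Inactive key reported above
  aws iam update-access-key --user-name <locked-user> --access-key-id <key-id> --status Active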


How to monitor audit event performance and back log in AWS Console

  1. Ransomware Defender has audit event monitoring and alarming features in the Managed Services icon.  It may be desirable to see audit data performance from within the AWS Console SQS landing page.
  2. Open the SQS landing page
    1. Select the region that has S3 buckets under protection
    2. The first quick check is the Messages Available column.  It should be zero, which means Ransomware Defender is processing messages as fast as they are published to the queue.
    3. Click on an SQS queue for more detail; in this example click on superna-ransomware-defender-notifications-queue.  This queue typically has a higher rate of messages.
    4. Click on the Monitoring tab to see the rate of messages received and the age of the oldest message for detailed performance statistics over time, with the ability to add them to a dashboard (CLI equivalents are shown after these steps).
    5. Done
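The same checks can be scripted with the AWS CLI (a sketch; assumes the CLI is configured for the protected region and that the queue name matches your deployment):

  # current queue depth - should normally be at or near zero
  QUEUE_URL=$(aws sqs get-queue-url --queue-name superna-ransomware-defender-notifications-queue --query QueueUrl --output text)
  aws sqs get-queue-attributes --queue-url "$QUEUE_URL" --attribute-names ApproximateNumberOfMessages
  # age of the oldest message over the last hour (CloudWatch metric, 5 minute periods)
  aws cloudwatch get-metric-statistics --namespace AWS/SQS --metric-name ApproximateAgeOfOldestMessage \
    --dimensions Name=QueueName,Value=superna-ransomware-defender-notifications-queue \
    --statistics Maximum --period 300 \
    --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S) --end-time $(date -u +%Y-%m-%dT%H:%M:%S)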


How to Plan and Configure DR solution for CloudFormation Stack

  1. Overview
    1. The solution uses AWS services such as CloudTrail, SQS queues, EventBridge and the Managed Kafka cluster service; these are defined with per-region rules and are highly available services.  The Managed Kafka service uses an HA cluster to process events and allows scaling up or down.  The CloudFormation stack runs in a region and offers an in-region HA solution with an autoscaling group for event processing.  In the event that a region's EC2 service is impacted, the following steps can be used to deploy the CloudFormation stack in a different region.
    2. RTO is 1 hour to deploy and configure the stack in a new region and connect the stack back to the HA services above that are used by Ransomware Defender for AWS.
  2. Deploy a new CloudFormation stack in another AWS region
    1. Use the instructions here (Installation Procedures CloudFormation Deployment of Ransomware Defender Stack) to complete this step.
      1. NOTE: You will need to create a new key pair in the new region and adjust the CLI command to deploy the stack in the new region (see the example after these steps).
      2. Time to complete: approximately 45 minutes (this is CloudFormation automation time with no customer steps to execute while waiting for the stack to deploy).
    2. Validate the stack deployment with the instructions here (Installation Procedures - Ransomware Defender Instance and CloudFormation Stack Validation).
      1. Time to complete: approximately 5 minutes
    3. Restore the backup
      1. Time to complete: approximately 10 minutes.
      2. Download the latest backup from the S3 backup bucket configured in the Eyeglass GUI.  7 daily backups are stored external to the Eyeglass appliance instance in an S3 bucket.
      3. Follow the appliance backup restore steps in this guide.
    4. Login to the Eyeglass GUI with the new IP address in the new region.  (See the stack validation step above to get the new Eyeglass appliance IP address; make sure the firewall subnet range used in the stack deployment includes your administration PC IP range.)
    5. Open the Managed Services icon - verify the services show green
    6. Open the Inventory icon - verify all previously entered AWS regions that were protected are showing in the inventory tree.
    7. Open the Ransomware Defender icon
      1. Select the Security Guard tab
      2. Click Run Now to run the self test operations and monitor from the Jobs icon --> Running Jobs tab.
    8. DR failover complete
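A minimal sketch of the per-region CLI adjustments (the key pair name, stack name, template location and parameters are placeholders - use the values from the CloudFormation deployment guide referenced above):

  # create a key pair in the new region and save the private key
  aws ec2 create-key-pair --key-name eyeglass-dr-key --region us-west-2 --query KeyMaterial --output text > eyeglass-dr-key.pem
  chmod 400 eyeglass-dr-key.pem
  # deploy the stack in the new region using the new key pair
  aws cloudformation create-stack --stack-name ransomware-defender-dr --region us-west-2 \
    --template-url https://<bucket>/<template>.yaml \
    --parameters ParameterKey=KeyName,ParameterValue=eyeglass-dr-key \
    --capabilities CAPABILITY_NAMED_IAM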


How to Configure Eyeglass Automated Backup to AWS S3 Bucket

  1. In order to export the daily backups (7 day retention) to an S3 bucket, follow these steps.  This guide assumes AWS S3 is used; only AWS S3 has been tested and all other S3 targets are not supported.  Reference the FAQ.
  2. Requirements:
    1. Internet access to AWS S3
    2. Internet access to the openSUSE repository
    3. An S3 bucket: recommended name eyeglass-backups
    4. Access keys for a user with permission to write to and delete from the bucket
  3. Login to Eyeglass over ssh
    1. sudo -s (enter admin password)
  4. Install the FUSE S3 file system
    1. zypper install s3fs
    2. answer yes to install the packages
  5. Create the password file with access keys
    1. echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /etc/passwd-s3fs
    2. NOTE: replace the access and secret key values with the correct values
    3. chmod 640 /etc/passwd-s3fs
  6. Configure the backup folder
    1. Rename the current backup folder
    2. mv /opt/data/superna/var/backup/ /opt/data/superna/var/backup.bak
    3. mkdir -p /opt/data/superna/var/backup 
    4. chown sca:users /opt/data/superna/var/backup 
  7. Test the mount to AWS S3 bucket
    1. s3fs eyeglass-backups /opt/data/superna/var/backup 
      1. If it fails, use this command to debug the reason why
        1. s3fs eyeglass-backups /opt/data/superna/var/backup -o dbglevel=info -f -o curldbg
    2. No response will be returned if it is successful
    3. test the mount
      1. touch /opt/data/superna/var/backup/test
      2. This command should succeed in creating a test file.  Now test deletes:
      3. rm /opt/data/superna/var/backup/test
  8. Copy previous backups to the new S3 mounted path (and maintain owner and group of the files).  This step copies current backups to the S3 bucket.
    1. cp -rp /opt/data/superna/var/backup.bak/* /opt/data/superna/var/backup 
  9. Configure fstab to mount at boot time
    1. nano /etc/fstab
    2. Paste this to a new line (assume the s3 bucket name is eyeglass-backups)
      1. eyeglass-backups /opt/data/superna/var/backup fuse.s3fs _netdev,allow_other 0 0
      2. control+x  (to save and exit)
  10. Done.  The daily backup script will now place backup files in the eyeglass-backups bucket in S3 and manage 7 files.
  11. NOTE: To adapt these steps to use an S3-compatible target other than AWS, change the mount syntax to the following:
    1. s3fs eyeglass-backups /opt/data/superna/var/backup -o passwd_file=/etc/passwd-s3fs -o url=https://url.to.s3/ -o use_path_request_style
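To confirm the fstab entry from step 9 without a reboot, remount from fstab and check the result (a quick check, assuming the bucket name from the example):

  # mount anything in /etc/fstab that is not already mounted
  mount -a
  # verify the s3fs mount is present
  mount | grep s3fs
  df -h /opt/data/superna/var/backup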

How To Enable Centralized AD authentication

  1. Overview:  To enable centralized AD authentication it is typical to have an AD domain or domain controller in AWS synced or joined to an on-premise AD forest.  This assumes an AWS AD domain is available within the AWS account.
  2. Login to the Eyeglass VM with the SSH keys created with the CloudFormation stack.
  3. sudo -s (enter the admin password)
  4. zypper install samba-winbind
  5. Type yast
    1. Navigate to Windows Domain Membership
    2. Enter your AD domain and select the required membership options
    3. Enter a domain admin or an account that allows computers to be joined to the domain.
    4. Once joined successfully, exit YaST using Tab to select Quit (optional verification commands are shown after these steps).
  6. Test ssh login with AD
    1. Example login syntax for an AD domain called ad2.test with user demo1.  NOTE: the double slash is required to escape the slash character.
    2. Linux ssh client login:  ssh -i <key file> ad2.test\\demo1@x.x.x.x  OR  ssh -i <key file> demo1@ad2.test@x.x.x.x
  7. Done
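After the join, winbind connectivity can be verified with the standard samba-winbind tools (these are general winbind checks, not Superna-specific commands):

  # verify the machine account trust with the domain
  wbinfo -t
  # list domain users visible through winbind
  wbinfo -u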


How to login to Active Directory

  1. Prerequisites
    1. Complete the Join AD steps in the section above.
  2. Open the webUI
    1. Login syntax is user@domain.com  (NOTE: This can be the domain hosted in AWS or a trusted domain of the AWS hosted AD)
  3. Done

How to enable AD user and group collection for Role Based Access Control

  1. In order to support role based access controls we recommend integration with AD using the LDAP caching feature.  This uses an AD service account with read permissions to the directory to collect users and groups for use with the RBAC feature.
  2. Configuring LDAP AD user and group collection is covered in this guide.
  3. Once completed, follow the RBAC guide here.


© Superna Inc