Eyeglass Solutions Publication

DASM Administration Guide



Administration and Feature Usage


Typical Use Cases

  1. Identify high-risk hosts for rapid remediation, patching, and hardening.
    1. Features needed: the base feature that produces an AI-predicted data attack surface.
  2. High-security data risk: set a security baseline that hosts or users must meet to access data.
    1. Features needed: Dynamic DataShield autonomous enforcement policies.


Data Attack Surface Manager Main Dashboard

  1. The main dashboard shows the data attack surface.
    1. The columns indicate the contributing factors to the Data Risk Score shown in the Vulnerability Level column.
    2. If Data Classification is enabled, the PII score rates the level of PII data detected for this host and user combination.
    3. OS Type - the operating system of the client machine accessing the storage.
    4. The Access list shows the number of SMB or NFS exports the host has access to, by user or by IP address.
    5. The list of SMB/NFS shares accessed in the last 7 days shows the usage view alongside the permissions view. This provides a data over-exposure view that allows remediation to focus on users and hosts that hold many permissions but show very little usage.
    6. Open ports indicates the listening ports on the client that could be used by an attacker to compromise the host.
    7. The Anomaly column indicates whether the user behavior profile shows abnormal deviations in reads or writes to data that would suggest suspicious user behavior.
    8. The CVE score is an average of all CVEs found in the scan results.
    9. The CVE description shows a list of some of the CVEs found on the host.
    10. The CVE last scan indicates how current the scan results are when used to predict the Data Risk Score.
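
Taken together, the columns above can be pictured as fields on a single dashboard row. The sketch below is only an illustration; the field names are hypothetical and do not reflect the product's internal schema.

    # Hypothetical sketch of one Data Attack Surface dashboard row (illustrative only).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DashboardRow:
        host: str                    # client machine
        user: str                    # account accessing the storage
        data_risk_score: str         # vulnerability level, e.g. "Medium" or "High"
        pii_score: int               # 0-10, shown when Data Classification is enabled
        os_type: str                 # OS of the client machine
        shares_permitted: int        # SMB/NFS shares the host/user has access to
        shares_used_7d: int          # shares actually accessed in the last 7 days
        open_ports: List[int] = field(default_factory=list)   # listening ports on the client
        anomaly: bool = False        # abnormal read/write behaviour detected
        cve_avg_score: float = 0.0   # average of all CVEs found in the scan results
        cve_last_scan: str = ""      # how current the scan results are

    # Many permissions with little usage suggests over-exposed data worth remediating.
    row = DashboardRow("host-01", "jdoe", "High", 7, "Windows 11", 40, 2,
                       [445, 3389], False, 6.8, "2 days ago")
    unused_permissions = row.shares_permitted - row.shares_used_7d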


Data Attack Surface Manager Offensive Actions

  1. Offensive Remediation:
    1. DVM supports remediation for high-risk hosts and users with host-level blocking.  This allows high-risk hosts to be blocked from accessing data until remediation is complete; unblocking then restores the host's access to data.
  2. Risk Discovery
    1. Identify Data Attack Surface hosts with no vulnerability scan results, which expose data and user accounts to breach by a zero-day exploit or a malicious actor that has joined a machine to the network.
    2. SOC Analysts can choose to block hosts without a completed vulnerability scan assessment.
  3. Dynamic DataShield (DDS) Policies
    1. DDS policies automate the remediation of hosts and users with a high Data Risk Score by blocking them from accessing data until a minimum security assessment is complete or the DRS (Data Risk Score) drops below a specified threshold.
      1. Creating a policy requires the following fields (matching criteria use AND logic across all three matching fields; a sketch of this logic follows the list):
        1. name - text field - describes the policy purpose.
        2. source host IP - IP address range in x.x.x.x/yy syntax to identify source computers by network; default “*”, which means any source IP.
        3. Data Risk Score (mandatory field) - drop-down of risk scores, matched as greater than or equal to; for example, Medium means Medium and above DRS (Data Risk Score).
        4. Path - absolute path, for example /ifs/marketing (treated as /ifs/marketing/*, i.e. all paths under the marketing folder). NOTE: the device name is ignored for now, so a path can apply to any device under management; the risk of path overlap is low.
          1. default will be any path
        5. Action
          1. Once a policy is created DDS monitors incoming audit data to match criteria and enforce host lockouts.
          2. Unblock logic - DDS reassesses on an hourly basis; if the AI model's assessment of the DRS (Data Risk Score) indicates the threat level has dropped to or below the policy's threshold, the host-level unblock logic is executed and a webhook alert is sent.
          3. Host lockouts will update the policy dashboard to display status.
          4. The main dashboard in DVM will show a column for policies under “Autonomous data protection policy name”.
          5. Alerting
            1. Each time a policy acts with a lockout response, the following will occur:
              1. A webhook using the Superna Zero Trust payload schema will be sent so that Incident Response teams are informed that autonomous security policies have been activated or deactivated.
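
The policy flow above can be pictured as a small matching and enforcement loop. The sketch below is illustrative only; the names (DDSPolicy, block_host, send_zero_trust_webhook) and the set of risk levels are assumptions, not the product's implementation.

    # Hedged illustration only: none of these names exist in the DVM/DASM product.
    import ipaddress

    RISK_ORDER = {"low": 1, "medium": 2, "high": 3, "critical": 4}   # assumed levels

    class DDSPolicy:
        def __init__(self, name, source_cidr="*", min_drs="medium", path="*"):
            self.name = name                 # describes the policy purpose
            self.source_cidr = source_cidr   # x.x.x.x/yy network, "*" = any source IP
            self.min_drs = min_drs           # this Data Risk Score and above match
            self.path = path                 # absolute path, treated as path/*

        def matches(self, src_ip, drs, accessed_path):
            """All three matching criteria must be true (AND logic)."""
            ip_ok = (self.source_cidr == "*" or ipaddress.ip_address(src_ip)
                     in ipaddress.ip_network(self.source_cidr, strict=False))
            drs_ok = RISK_ORDER[drs.lower()] >= RISK_ORDER[self.min_drs.lower()]
            path_ok = (self.path == "*" or accessed_path == self.path
                       or accessed_path.startswith(self.path.rstrip("/") + "/"))
            return ip_ok and drs_ok and path_ok

    def enforce(policy, event, block_host, send_zero_trust_webhook):
        """Lock out the host and alert Incident Response when a policy matches."""
        if policy.matches(event["src_ip"], event["drs"], event["path"]):
            block_host(event["host"])                                  # host-level lockout
            send_zero_trust_webhook(policy.name, event, action="lockout")

    def hourly_reassess(policy, host, current_drs, unblock_host, send_zero_trust_webhook):
        """Remove the lockout once the re-evaluated DRS drops to or below the threshold."""
        if RISK_ORDER[current_drs.lower()] <= RISK_ORDER[policy.min_drs.lower()]:
            unblock_host(host)
            send_zero_trust_webhook(policy.name, {"host": host}, action="unlock")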

Dynamic DataShield Policy Operations

  1. When DDS policies activate, the dashboard will show a locked status and the policy name that triggered the lockout on the host.

  2. Each hour the Superna Data AI inferencing runs to re-evaluate the Data Risk Score of the attack surface, including any hosts to which a lockout has been applied.  If the DRS score falls below the policy's threshold, the lockout is removed.


Data Classification - How it Works

  1. Overview: This feature adds valuable data to the Superna AI model to influence the Data Risk Score based on the type of data a user is accessing.  The classification can detect and classify data using NLP (Natural Language Processing) and pattern-matching techniques.  The following classification methods are supported:
    1. Pretrained spaCy models (for names, locations, etc.)
    2. Regex patterns (e.g., SSNs, credit card numbers)

    3. Checksum validation (for entities like credit cards and IBANs)

  2. How it works

    1. Users detected on high-risk Data Attack Surface hosts have their most recent activity sampled, with a default of 10% of the files touched by a user. The results are used to compute a PII score between 0 and 10, with 10 being the highest, and this influences the AI model predictions (see the sketch at the end of this section).

    2. The current solution uses snapshots on file systems that are NFS mounted, reading data over a read-only NFS mount.  The snapshot is deleted after the processing of user files is completed.  This ensures the snapshot does not consume space when it is not required.

  3. How To Configure Classification Modes

    1. Edit the environment variable called CLASSIFICATION_MODE

    2. Set the variable on the host: export CLASSIFICATION_MODE=x

    3. Where x is 1, 2, or 3:

    4. 1 = regex classifiers only

    5. 2 = spaCy with en-core-web-sm (spaCy small model)

      1. person entity classifiers

    6. 3 = spaCy with en-core-web-lg (spaCy large model)

      1. person entity classifiers
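
As an illustration of the regex and checksum methods listed above, the hedged sketch below shows how a credit-card-like number could be matched with a pattern, validated with the Luhn checksum, and folded into a 0-10 PII score, with the mode read from CLASSIFICATION_MODE. The regex, scoring formula, and function names are assumptions for illustration, not the product's actual classifiers.

    # Illustrative only: a simplified regex + checksum classifier and a PII score.
    # The product combines spaCy models, regex patterns and checksum validation;
    # the patterns, mode handling and scoring below are hypothetical.
    import os
    import re

    # 1 = regex only, 2 = spaCy small model, 3 = spaCy large model (see above)
    MODE = int(os.environ.get("CLASSIFICATION_MODE", "1"))

    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # rough credit-card-like pattern
    SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")     # US SSN pattern

    def luhn_valid(number: str) -> bool:
        """Checksum validation, as used for entities such as credit cards."""
        digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
        total = 0
        for i, d in enumerate(digits):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return len(digits) >= 13 and total % 10 == 0

    def classify(text: str) -> dict:
        """Count regex hits; credit-card matches must also pass the checksum."""
        hits = {"CREDIT_CARD": 0, "US_SSN": 0}
        for match in CARD_RE.finditer(text):
            if luhn_valid(match.group()):
                hits["CREDIT_CARD"] += 1
        hits["US_SSN"] = len(SSN_RE.findall(text))
        return hits

    def pii_score(sampled_file_texts: list) -> int:
        """Hypothetical 0-10 score: fraction of sampled files containing PII."""
        if not sampled_file_texts:
            return 0
        with_pii = sum(1 for text in sampled_file_texts if any(classify(text).values()))
        return round(10 * with_pii / len(sampled_file_texts))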




Data Classification Categories

Identity & Demographics (6)

Entity Type          Description
PERSON               Full names or identifiable persons
AGE                  Age mentions
GENDER               Gender expressions
DATE_TIME            Date and time values
US_PASSPORT          U.S. passport numbers
US_DRIVER_LICENSE    U.S. driver’s license numbers

 


Sensitive Numbers & Codes (7)


 

Digital Identifiers & Network Data (6)

Entity Type          Description
IP_ADDRESS           IPv4 or IPv6 addresses
EMAIL_ADDRESS        Email addresses
URL                  Web URLs
DOMAIN_NAME          Domain names
AWS_ACCESS_KEY       Amazon Web Services access keys
AZURE_STORAGE_KEY    Microsoft Azure storage keys

 

Location-Based Identifiers (2)

Entity Type    Description
LOCATION       Cities, countries, etc.
US_ZIP_CODE    U.S. postal codes

 

Professional & Organizational (2)

Entity Type        Description
ORGANIZATION       Company or org names
MEDICAL_LICENSE    Medical license identifiers

National IDs (Non-US) (1)

Entity Type    Description
NRP            Singapore National Registration ID

 

Summary Table

Category                         Count
Identity & Demographics          6
Sensitive Numbers & Codes        7
Digital Identifiers & Network    6
Location-Based Identifiers       2
Professional & Organizational    2
National IDs (Non-US)            1
Total                            24

Data Classification Exposure Analytics

This reporting dashboard offers insight into your PII exposure with two dashboards that summarize where your risks lie and which users and computers are connected to that risk.


SMB/NFS PII Exposure Detection

This dashboard shows users that create PII content and which host they use, along with a summary of the types of PII found in each user's activity and a score between 1 and 10, where 10 represents a high level of PII exposure.  The Data Risk Score for the host/user is also shown to help prioritize remediation on the hosts and users found on this dashboard.

How this capability helps reduce risk

  1. Pinpoint Risk Hotspots
    1. Discover where sensitive data is most vulnerable by mapping actual user interactions with PII across SMB shares and NFS exports. Prioritize protection efforts based on real usage, not assumptions.
  2. Quantify Exfiltration Risk in Real Terms
    1. Move beyond generic risk scoring by measuring the true threat: who accessed what, when, and how much sensitive data was involved. Turn unstructured data sprawl into a targeted risk profile.
  3. Enable Targeted Remediation
    1. Focus remediation efforts on shares or exports with high PII concentration and high user interaction, reducing false positives and maximizing the impact of security operations.
  4. Bridge Data Security with Compliance Monitoring
    1. Deliver auditable insights into how and where sensitive data is exposed—empowering compliance teams with contextual, actionable evidence tied to user behavior.


This dashboard provides an SMB/NFS view of where PII data is concentrated and shows the number of users and hosts involved with PII data. 




Clicking the user or client number will drill in to show the users or clients that touched PII data on this SMB share in the last 7 days.



Permission vs. Usage Over-Exposure Analysis

This dashboard monitors user activity and expands share-level permissions to build a ratio of access versus permissions.  To reduce risk and data exposure, the least-privilege model requires that users who do not access data be removed from the ACL or share-level permissions, shrinking the Data Attack Surface.  A brief sketch of this ratio follows.
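
The sketch below is a hedged illustration of the access-versus-permission ratio; the function and variable names are illustrative, not the product's data model.

    # Illustrative calculation of permission vs. usage over-exposure for one share.
    # The product derives the real inputs from expanded share-level permissions
    # and the last 7 days of audit activity; the names here are hypothetical.

    def overexposure_ratio(permitted_users: set, active_users: set) -> float:
        """Fraction of permitted users who never touched the share (0.0 - 1.0)."""
        if not permitted_users:
            return 0.0
        dormant = permitted_users - active_users
        return len(dormant) / len(permitted_users)

    permitted = {"jdoe", "asmith", "bchan", "svc_backup", "mlee"}
    active_last_7_days = {"jdoe"}

    ratio = overexposure_ratio(permitted, active_last_7_days)
    # ratio == 0.8: 80% of permitted users are candidates for removal under a
    # least-privilege review, shrinking the Data Attack Surface.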


How This Capability Helps Reduce Risk

  1. Shrink the Data Attack Surface with Precision
    1. Identify users who have access to data they don’t use. Reduce risk exposure from dormant or excessive permissions and enforce least-privilege access policies intelligently.
  2. Operationalize Zero Trust at the File System Layer
    1. Go beyond static access control lists—use real-world activity to justify or revoke access. DASM aligns with Zero Trust principles by validating actual need-to-know.
  3. Turn Audit Logs into Proactive Access Governance
    1. Convert audit trails into actionable access intelligence. Automate the detection of access drift and help teams surgically close privilege gaps before they’re exploited.
  4. Drive Access Reviews with Usage Context
    1. Equip IT and security teams with usage-based insights that make access reviews faster, smarter, and more defensible—especially in regulated environments.


 

Operations

Local Login

  1. Open a new browser tab and point to this URL: https://<ip-address-of-dvm-vm>:5001/. It will open the initial login page.
    1. The default user login is dasmadmin and the default password is Vuln3r@b1l1ty!

Active Directory Login

  1. The steps below configure AD authentication with role-based group support.  This allows an AD group to control which users can log in to the Data Attack Surface Manager console.
    1. Edit the file below and enter the AD service account, key for the session cookie, AD domain, and remaining fields.
    2. AD Authentication Configuration

      We need to update the config file cvm_config.py with the variables below based on our environment.



      # cvm_config.py

      # Configuration for DASM WebUI Login Authentication application


      # ---------------------------

      # Flask Application Settings

      # ---------------------------

      # A long, random secret key used by Flask to sign session cookies,

      # CSRF tokens, and any other data stored client-side.

      # Generate once (e.g. using Python's secrets.token_urlsafe) and keep it secret.

      SECRET_KEY = "YOUR_FLASK_SECRET_KEY_HERE"


      # -----------------------

      # Local Admin Credentials

      # -----------------------

      # A simple in-memory store of local usernames and passwords for quick admin access.

      # Format: 'username': {'password': 'plain_text_password'}

      # Replace with secure values. The default is given here.

      LOCAL_USERS = {

          'dasmadmin': {'password': 'Vuln3r$b1l1ty!'}

      }


      # ---------------------------

      # Active Directory (AD/LDAP)

      # ---------------------------

      # The hostname or IP of your AD server.

      AD_HOST = "ad.example.com"


      # Whether to use LDAPS (True for port 636 implicit SSL) or StartTLS on port 389.

      AD_USE_SSL = True


      # Port for the LDAP connection (636 for LDAPS, 389 for StartTLS).

      AD_PORT = 636


      # The NetBIOS domain name (used for NTLM binds as DOMAIN\\user).

      AD_DOMAIN = "EXAMPLE"


      # Whether to require certificate validation when using LDAPS/StartTLS.

      # Set False for dev (self-signed certs), True in production.

      AD_VALIDATE_CERT = True


      # Path to your CA bundle (PEM file) to verify AD server certificate.

      AD_CA_BUNDLE = "/path/to/ca_chain.pem"


      # ---------------------------

      # Service Account for AD Lookup

      # ---------------------------

      # A dedicated AD account allowed to search users and groups.

      # Format: r"DOMAIN\username"

      AD_SERVICE_ACCOUNT = r"EXAMPLE\ldap-reader"


      # Password for the service account. Keep secret!

      AD_SERVICE_PASSWORD = "service_account_password"


      # ---------------------------

      # LDAP Search Bases

      # ---------------------------

      # Base DN for user searches (e.g. the root of your domain).

      AD_BASE_DN = "DC=example,DC=com"


      # Distinguished Name of the AD group whose members are allowed

      # to log in to the WebUI. Only users in this group will be granted access.

      AD_ALLOWED_GROUP_DN = "CN=WebUI Users,OU=Security Groups,DC=example,DC=com"


  2. Log in with domain\user or user@domain syntax.
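
As the SECRET_KEY comment in cvm_config.py suggests, the key can be generated once with Python's secrets module, for example:

    # Generate a long, random Flask secret key once and paste it into cvm_config.py.
    import secrets
    print(secrets.token_urlsafe(64))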

2FA with Google Authenticator

  1. This solution supports login to a local user account or an AD account plus a one-time password provided by the Google Authenticator application.  The login requires setting up the shared key for the user account with Google Authenticator.



Alert and Webhook Configuration with Data Security Edition

  1. To configure DVM to send webhooks to Eyeglass and leverage the integrations listed here, follow the configuration steps below.  It is also possible to send webhooks directly to another endpoint, assuming the endpoint can parse the payload.

Log in to the DVM VM over SSH


  1. nano /mnt/ml_data/ml-cvm/cvm_webui_rapid7.py
  2. Locate the section below and set the IP address to the Eyeglass VM IP address, if different from the DVM VM IP address.
  3. DVM_WEBHOOK_IP = "x.x.x.x" # Specify DVM Webhook Endpoint IP Address
    DVM_WEBHOOK_PORT = "5000" # Specify DVM Webhook Endpoint Port Number
    DVM_WEBHOOK_ENDPOINT = "/webhook" # Specify DVM Webhook Endpoint route (e.g., "/webhook")
  4. Create an Eyeglass VM webhook integration to receive the DVM webhook alerts on port 5000 (or the value used, if changed); leave the endpoint route /webhook unless directed to change it by support.
  5. Configure the integrations from the link above to select which integration is used to send DVM alerts.
  6. NOTE: If Eyeglass integrations already exist, you will need to change the port that DVM uses to send webhooks.
  7. Press Ctrl+X, confirm the save, and exit.
  8. Done.

Sample DVM webhook Json payload

  1. The example payloads cover the following cases:


    1. Manual Locked out
    2. Manual UnLock
    3. Auto Locked out with Containment Remediation Enabled
    4. Auto Locked out with Containment Remediation Disabled
    5. Auto Unlock with Containment Remediation Enabled
    6. Auto Unlock with Containment Remediation Disabled
  2. The application-specific payload is embedded in the extraparams section of the Zero Trust payload (see the sketch below).
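
The full Zero Trust payload schema is not reproduced here. The sketch below only illustrates how a receiving endpoint might pull the application-specific extraparams section out of the JSON body; the framework choice and the keys read from extraparams are assumptions.

    # Minimal illustrative webhook receiver (not the Eyeglass integration itself).
    # Assumes a JSON body containing an "extraparams" section as described above;
    # the keys read from it below are hypothetical.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])
    def receive_dvm_webhook():
        payload = request.get_json(force=True)
        extraparams = payload.get("extraparams", {})   # application-specific details
        action = extraparams.get("action")             # e.g. lockout / unlock (assumed key)
        host = extraparams.get("host")                 # affected host (assumed key)
        print(f"DVM alert: action={action} host={host}")
        return "", 204

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)             # matches DVM_WEBHOOK_PORT above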



Software Start/Stop

  1. To start / restart DVM processes, we can use the following:
    1. cd /mnt/ml_data/ml-cvm
    2. ./cvm_check_restart.sh
  2. To stop DVM processes, we can use the following:
    2. cd /mnt/ml_data/ml-cvm
    2. ./cvm_stop_processes.sh

Log Gather For Support

  1. cd /mnt/ml_data/ml-cvm
  2. python3 cvm_loggather.py
  3. This command will generate a zip file that can be uploaded to support cases


Data Classification Configuration

Data Processing

  1. Files are processed for each user on an hourly basis and the training model is built with this new classification risk data included in the data set.
  2. Only files < 500 MB will be processed and files on the exclusion list will be skipped.
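
A hedged sketch of these selection rules follows; the helper names and the shape of the exclusion list are assumptions, and wildcard handling of excluded extensions is described in the next subsection.

    # Illustrative only: skip files on the exclusion list or at/over the 500 MB limit.
    import fnmatch
    import os

    MAX_FILE_BYTES = 500 * 1024 * 1024     # only files smaller than 500 MB are processed

    def should_process(path: str, excluded_patterns: list) -> bool:
        ext = os.path.splitext(path)[1].lstrip(".").lower()
        if any(fnmatch.fnmatch(ext, pattern.lower()) for pattern in excluded_patterns):
            return False                   # e.g. a pattern like "jp*" matches jpg and jpeg
        try:
            return os.path.getsize(path) < MAX_FILE_BYTES
        except OSError:
            return False                   # unreadable files are skipped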

Sensitivity Tuning & File Type Filtering

  1. The data classification function will only register a file with a confidence level greater than 70%.  This can be tuned higher to reduce false positives from low-grade classification detections within files.
  2. The data classification feature supports over 1500 file types.  Image file types are not supported and are filtered out by default.  The list of file types that will be ignored can be customized.
    1. The file /mnt/ml_data/ml-cvm/cvm_ext_config.json can be edited to add additional file extensions to be ignored.  You can use * to wildcard part of an extension.  The default exclusions are listed below.

© Superna Inc