DASM Administration Guide
- Administration and Feature Usage
- Typical Use Cases
- Data Attack Surface Manager Main Dashboard
- Data Attack Surface Manager Offensive Actions
- Dynamic DataShield Policy Operations
- Data Classification - How it Works
- Data Classification Categories
- Operations
- Local Login
- Active Directory Login
- 2FA with Google Authenticator
- Alert and Webhook Configuration with Data Security Edition
- Software Start/Stop
- Log Gather For Support
Administration and Feature Usage
Typical Use Cases
- Identify high-risk hosts for rapid remediation, patching, and hardening.
- Features needed: the base feature that produces an AI-predicted data attack surface.
- High-security data risk: set a security baseline for hosts or users to access data.
- Features needed: Dynamic DataShield autonomous enforcement policies.
Data Attack Surface Manager Main Dashboard
- The main dashboard shows the data attack surface.
- The columns indicate the contributing factors to the Data Risk Score shown in the Vulnerability Level column.
- If Data Classification is enabled, the PII score rates the level of PII data detected for this host and user combination.
- The Access list shows the number of SMB or NFS exports the host has access to, by user or by IP address.
- Open ports indicates the listening ports on the client that could be used by an attacker to compromise the host.
- The Anomaly column indicates whether the user behavior profile shows abnormal deviations in reads or writes to data that would indicate suspicious user behavior.
- The CVE score is an average of all CVEs found in the scan results.
- The CVE description shows a list of some of the CVEs found on the host.
- The CVE last scan indicates how current the scan results are when used to predict the Data Risk Score.
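The per-host CVE score described above is an average over the CVEs found in scan results. A minimal sketch of that averaging, assuming CVSS base scores as the inputs (the scan result shape and function name here are hypothetical, not the product's internal API):

```python
# Illustrative only: derive a host's CVE score as the average of all
# CVE (CVSS) scores found in its scan results, as the dashboard
# column describes. The host_scan structure is a hypothetical example.

def average_cve_score(cvss_scores):
    """Average CVSS score across all CVEs found for a host; 0.0 if none."""
    if not cvss_scores:
        return 0.0
    return round(sum(cvss_scores) / len(cvss_scores), 1)

host_scan = {"cves": {"CVE-2021-44228": 10.0, "CVE-2023-4863": 8.8, "CVE-2019-0708": 9.8}}
print(average_cve_score(list(host_scan["cves"].values())))  # 9.5
```

A host with no scan results yields 0.0 here, which is why hosts lacking a completed vulnerability scan are flagged separately under Risk Discovery rather than trusted on a low score.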
Data Attack Surface Manager Offensive Actions
- Offensive Remediation:
- DVM supports remediation for high-risk hosts and users with host-level blocking. High-risk hosts can be blocked from accessing data until remediation is completed; unblocking restores the host's access to data.
- Risk Discovery
- Identify Data Attack Surface hosts with no vulnerability scan results, which expose data and user accounts to breach from a zero-day or a malicious actor that has joined a machine to the network.
- SOC analysts can choose to block hosts without a completed vulnerability scan assessment.
- Autonomous Data Shield (DDS) Policies
- DDS policies automate the remediation of hosts and users with high Data Risk Scores by blocking them from accessing data until a minimum security assessment is complete or the DRS (Data Risk Score) drops below a specified threshold.
- Creating a policy requires the following fields (matching criteria use AND logic across all 3 fields):
- Name - text field - describes the policy purpose.
- Source host IP - IP address range in x.x.x.x/yy syntax to identify source computers by network; the default "*" means any source IP.
- Data Risk Score (mandatory field) - drop-down of risk scores, matched as greater than or equal to; for example, medium means medium and above.
- Path - absolute path, for example /ifs/marketing (treated as /ifs/marketing/*, i.e., all paths under the marketing folder). NOTE: the device name is ignored for now, so a path can apply to any device under management; the risk of path overlap is low.
- The default is any path.
- Action
- Once a policy is created DDS monitors incoming audit data to match criteria and enforce host lockouts.
- Unblock logic - DDS reassesses on an hourly basis; if the AI model's assessment of the DRS (Data Risk Score) indicates the threat level has dropped to or below the policy's DRS threshold, the host-level unblock logic is executed and a webhook alert is sent.
- Host lockouts will update the policy dashboard to display status.
- The main dashboard in DVM will show a column for policies under "Autonomous data protection policy name".
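The matching rules above (source IP CIDR, DRS threshold with greater-than-or-equal semantics, and path prefix, combined with AND logic) can be sketched as follows. The policy and event field names are hypothetical illustrations, not DVM's actual data model:

```python
# A minimal sketch of DDS policy matching. All three criteria
# (source IP range, DRS threshold, path prefix) must match (AND logic).
# Field names and the event dict are hypothetical.
import ipaddress

DRS_LEVELS = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def policy_matches(policy, event):
    # Source IP: "*" means any source IP, otherwise x.x.x.x/yy CIDR syntax.
    if policy["source_cidr"] != "*":
        net = ipaddress.ip_network(policy["source_cidr"], strict=False)
        if ipaddress.ip_address(event["source_ip"]) not in net:
            return False
    # DRS: greater than or equal to, e.g. "medium" matches medium and above.
    if DRS_LEVELS[event["drs"]] < DRS_LEVELS[policy["min_drs"]]:
        return False
    # Path: /ifs/marketing is treated as /ifs/marketing/* (all paths under it).
    path = policy["path"]
    return path == "*" or event["path"] == path or event["path"].startswith(path + "/")

policy = {"name": "lock-marketing", "source_cidr": "10.0.0.0/24",
          "min_drs": "medium", "path": "/ifs/marketing"}
event = {"source_ip": "10.0.0.15", "drs": "high",
         "path": "/ifs/marketing/q3/plan.docx"}
print(policy_matches(policy, event))  # True
```

Note the prefix check appends "/" before comparing, so a policy on /ifs/marketing does not accidentally match a sibling folder such as /ifs/marketing2.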
- Alerting
- Each time a policy acts with a lockout response, the following occurs:
- A webhook using the Superna Zero Trust payload schema is sent so that Incident Response teams are informed that autonomous security policies have been activated or deactivated.
Dynamic DataShield Policy Operations
- When DDS policies activate, the dashboard shows the locked status and the policy name that triggered the lockout on the host.
- Each hour, Superna Data AI inferencing runs to re-evaluate the Data Risk Score of the attack surface, including any hosts to which a lockout has been applied. If the DRS score falls below the policy threshold, the lockout is removed.
Data Classification - How it Works
- Overview: This feature adds valuable data to the Superna AI model to influence the Data Risk Score based on the type of data a user is accessing. Classification can detect and classify data using NLP (Natural Language Processing) and pattern-matching techniques, supporting the following methods:
- Pretrained spaCy models (for names, locations, etc.)
- Regex patterns (e.g., SSNs, credit card numbers)
- Checksum validation (for entities like credit cards and IBANs)
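The regex and checksum methods above can be illustrated together: a pattern finds credit-card-like number candidates, and a Luhn checksum filters out false positives. This is a simplified sketch, not the product's actual patterns:

```python
# Hedged sketch of pattern matching plus checksum validation:
# a regex finds candidate card numbers, then the Luhn checksum
# rejects strings of digits that are not valid card numbers.
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in re.sub(r"[ -]", "", number)]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

text = "Card on file: 4111 1111 1111 1111, ref 1234 5678 9012 3456."
hits = [m.group() for m in CARD_RE.finditer(text) if luhn_valid(m.group())]
print(hits)  # ['4111 1111 1111 1111']
```

The second 16-digit run matches the regex but fails the checksum, which is exactly the false-positive class that checksum validation exists to remove.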
How it works
Users detected on high-risk data attack surface hosts have their most recent activity sampled, with a default of 10% of the files touched by the user. The results are used to compute a PII score between 0 and 10, with 10 being the highest; this influences the AI model predictions.
The current solution uses snapshots on NFS-mounted file systems and reads data over a read-only NFS mount. The snapshot is deleted after the user's files have been processed, ensuring it does not consume space when no longer required.
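The sampling and scoring flow above can be sketched as follows. The detection step and the scaling from entity counts to a 0-10 score are hypothetical stand-ins for the product's NLP pipeline; only the 10% default sample rate and the 0-10 range come from the text:

```python
# Illustrative sketch: sample a default 10% of the files a user touched,
# count PII entities detected in the sample, and map the result onto a
# 0-10 PII score. Function names and scaling are hypothetical.
import random

def sample_files(files, rate=0.10):
    """Pick ~10% of recently touched files (at least one, if any exist)."""
    k = max(1, int(len(files) * rate)) if files else 0
    return random.sample(files, k)

def pii_score(entity_hits, files_scanned):
    """Map average PII entities per scanned file onto a 0-10 scale."""
    if files_scanned == 0:
        return 0
    avg = sum(entity_hits.values()) / files_scanned
    return min(10, round(avg * 2))   # hypothetical scaling factor

files = [f"/ifs/hr/file{i}.docx" for i in range(100)]
sampled = sample_files(files)
print(len(sampled))  # 10
hits = {"US_SSN": 12, "CREDIT_CARD": 3, "PERSON": 25}
print(pii_score(hits, files_scanned=len(sampled)))  # 8
```

In the real pipeline the per-file detection would run against the read-only NFS snapshot described above, so sampling never touches live user data.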
Data Classification Categories
Identity & Demographics (6)
| Entity Type | Description |
|---|---|
| PERSON | Full names or identifiable persons |
| AGE | Age mentions |
| GENDER | Gender expressions |
| DATE_TIME | Date and time values |
| US_PASSPORT | U.S. passport numbers |
| US_DRIVER_LICENSE | U.S. driver’s license numbers |
Sensitive Numbers & Codes (7)
Digital Identifiers & Network Data (6)
| Entity Type | Description |
|---|---|
| IP_ADDRESS | IPv4 or IPv6 addresses |
| EMAIL_ADDRESS | Email addresses |
| URL | Web URLs |
| DOMAIN_NAME | Domain names |
| AWS_ACCESS_KEY | Amazon Web Services access keys |
| AZURE_STORAGE_KEY | Microsoft Azure storage keys |
Location-Based Identifiers (2)
| Entity Type | Description |
|---|---|
| LOCATION | Cities, countries, etc. |
| US_ZIP_CODE | U.S. postal codes |
Professional & Organizational (2)
| Entity Type | Description |
|---|---|
| ORGANIZATION | Company or org names |
| MEDICAL_LICENSE | Medical license identifiers |
National IDs (Non-US) (1)
| Entity Type | Description |
|---|---|
| NRP | Singapore National Registration ID |
Summary Table
| Category | Count |
|---|---|
| Identity & Demographics | 6 |
| Sensitive Numbers & Codes | 7 |
| Digital Identifiers & Network | 6 |
| Location-Based Identifiers | 2 |
| Professional & Organizational | 2 |
| National IDs (Non-US) | 1 |
| Total | 24 |
Operations
Local Login
- Open a new browser tab and go to https://<ip-address-of-dvm-vm>:5001/. This opens the initial login page.
- The default user login is dasmadmin with the default password Vuln3r@b1l1ty!
Active Directory Login
- This requires the OS to be joined to Active Directory with the steps below:
- Log in to the OS over SSH as the admin user with the default password 3y3gl4ss
- sudo -s
- Enter the admin password
- sudo yast2 samba-client
- Complete the dialog box to authenticate and join your AD domain
- Once completed successfully, the login page will support AD login using user@domain syntax
2FA with Google Authenticator
- This solution supports login to a local user account or AD account plus the one-time password provided by the Google Authenticator application.
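For background, a one-time password from Google Authenticator is a TOTP code (RFC 6238). The sketch below shows how such a code can be generated and verified using only the standard library; it is illustrative and is not DASM's actual 2FA implementation:

```python
# Minimal RFC 6238 TOTP sketch (standard library only). Illustrates how
# a Google Authenticator one-time password is derived from a shared
# base32 secret and the current 30-second time step.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, for_time: int, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(for_time) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret_b32: str, code: str, window: int = 1) -> bool:
    """Accept codes from the current step plus/minus `window` steps of drift."""
    now = int(time.time())
    return any(hmac.compare_digest(totp(secret_b32, now + d * 30), code)
               for d in range(-window, window + 1))

secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, not a real credential
code = totp(secret, int(time.time()))
print(verify(secret, code))  # True
```

The one-step verification window tolerates small clock drift between the phone and the server, which is why a code entered just as it rolls over still works.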
Alert and Webhook Configuration with Data Security Edition
- To configure DVM to send webhooks to Eyeglass and leverage the integrations listed here, follow the configuration steps below. It is also possible to send webhooks directly to another endpoint, assuming that endpoint can parse the payload.
- Log in to the DVM VM over SSH
- nano /mnt/ml_data/ml-cvm/cvm_webui_rapid7.py
- Locate the section below and set the ip address to the Eyeglass VM ip address, if different than the DVM VM IP address.
- DVM_WEBHOOK_IP = "x.x.x.x" # Specify DVM Webhook Endpoint IP Address
- DVM_WEBHOOK_PORT = "5000" # Specify DVM Webhook Endpoint Port Number
- DVM_WEBHOOK_ENDPOINT = "/webhook" # Specify DVM Webhook Endpoint route (e.g., "/webhook")
- Create an Eyeglass VM webhook integration to receive the DVM webhook alerts on port 5000 (or the value used if changed); leave the endpoint route /webhook unless directed to change it by support.
- Use the integrations from the link above to configure which integration is used to send DVM alerts.
- NOTE: If Eyeglass integrations already exist you will need to change the port that DVM uses to send webhooks.
- Press Ctrl+X, then Y, to save and exit.
- Done.
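To sanity-check the endpoint settings after editing them, a POST like the one DVM sends can be simulated. In this self-contained sketch a throwaway local HTTP server stands in for the Eyeglass VM, and the endpoint variables mirror the configuration above (in production DVM_WEBHOOK_IP would be the Eyeglass VM IP and the port "5000"):

```python
# Hedged sketch: POST a small JSON body to the configured webhook
# endpoint. A local throwaway server substitutes for Eyeglass so the
# snippet runs anywhere; the payload here is a test stub, not the
# Superna Zero Trust schema.
import json, threading, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        received.append(json.loads(body))
        self.send_response(200)
        self.end_headers()
    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)    # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

DVM_WEBHOOK_IP = "127.0.0.1"                      # Eyeglass VM IP in production
DVM_WEBHOOK_PORT = str(server.server_address[1])  # normally "5000"
DVM_WEBHOOK_ENDPOINT = "/webhook"

url = f"http://{DVM_WEBHOOK_IP}:{DVM_WEBHOOK_PORT}{DVM_WEBHOOK_ENDPOINT}"
req = urllib.request.Request(url, data=json.dumps({"test": "dvm-webhook"}).encode(),
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200

server.shutdown()
```

A 200 response confirms the IP, port, and route are reachable before relying on real policy-triggered alerts.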
Sample DVM webhook JSON payload
- The application-specific portion of the payload is embedded in the extraparams section of the Zero Trust payload.
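The exact Superna Zero Trust payload schema is not reproduced in this guide. The sketch below shows only the structural point made above, that application-specific fields ride inside the extraparams section; every field name other than extraparams, and all values, are hypothetical:

```python
# Hypothetical payload shape: application-specific data is nested under
# "extraparams" inside the Zero Trust payload. Field names and values
# are illustrative assumptions, not the real schema.
import json

payload = {
    "event": "autonomous-policy-lockout",       # hypothetical field
    "source": "DVM",                            # hypothetical field
    "extraparams": {                            # application-specific section
        "policy_name": "lock-marketing",        # hypothetical example values
        "host": "10.0.0.15",
        "data_risk_score": "high",
        "action": "host-level-block",
    },
}
print(json.dumps(payload, indent=2))
```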
Software Start/Stop
- To start / restart DVM processes, we can use the following:
- cd /mnt/ml_data/ml-cvm
- ./cvm_check_restart.sh
- To stop DVM processes, we can use the following:
- cd /mnt/ml_data/ml-cvm
- ./cvm_stop_processes.sh
Log Gather For Support
- cd /mnt/ml_data/ml-cvm
- python3 cvm_loggather.py
- This command generates a zip file that can be uploaded to support cases.