MITRE ATT&CK: T1562.008 — Impair Defenses: Disable or Modify Cloud Logs
Tactic: TA0005 — Defense Evasion
BlackCat Function: Disable-DiagnosticSetting
Azure Monitor diagnostic settings control the flow of security telemetry from Azure resources to monitoring destinations. Every audit event, data-plane operation, and authentication decision is captured through these settings and routed to Log Analytics workspaces, Storage Accounts, or Event Hubs where SIEM platforms like Microsoft Sentinel consume them.
This creates an architectural dependency: if the diagnostic setting is compromised, the entire downstream detection pipeline goes blind. Unlike disabling a SIEM rule (where the underlying data is still collected), impairing a diagnostic setting stops telemetry at the source. No data is collected, no alerts fire, and no forensic evidence is created.
This article examines the attack surface exposed by Azure's diagnostic settings architecture, the techniques available to exploit it, and the detection gaps that make this one of the most effective defense evasion methods in Azure environments.
Azure resources produce two types of telemetry:
Logs — Discrete security-relevant events:
- `AuditEvent` — Administrative and data-plane operations on Key Vaults
- `StorageRead`, `StorageWrite`, `StorageDelete` — Blob and queue operations
- `SQLSecurityAuditEvents` — Database access and query patterns
- `SignInLogs`, `AuditLogs` — Entra ID authentication events
Metrics — Continuous performance counters:
- `AllMetrics` / `Transaction` — Request counts, latency, availability
- Used by dashboards, autoscale rules, and health alerts
These two categories serve fundamentally different security functions. Logs answer *who did what, and when*. Metrics answer *is the system healthy*. An attacker who disables logs while preserving metrics eliminates forensic capability without triggering operational alerts.
Each diagnostic setting is a JSON object that binds three things together:
- Categories — Which log and metric types to collect
- Destinations — Where to send them (Log Analytics, Storage, Event Hub, Partner)
- Retention — How long to keep them (deprecated but still present in API)
This creates a single point of failure. One PUT request to the Azure Resource Manager API can silently alter what telemetry is collected, where it goes, or both.
A critical architectural detail: log categories and metric categories are independently controlled within the same diagnostic setting. Each has its own enabled boolean:
```json
{
  "logs":    [{ "category": "AuditEvent", "enabled": true }],
  "metrics": [{ "category": "AllMetrics", "enabled": true }]
}
```

Azure enforces that at least one enabled category must exist when a destination is configured. But it does not enforce that log categories must be enabled. A setting with all logs disabled and only metrics enabled is perfectly valid — and from Azure's perspective, fully functional.
This separation is the foundation of the stealth impairment technique.
1. No integrity protection
Diagnostic settings have no tamper detection. Azure does not alert when log categories are disabled. The Activity Log records that a write operation occurred, but does not capture which categories changed or their before/after state.
2. Broad permission distribution
The Microsoft.Insights/diagnosticSettings/write permission is included in Contributor, Owner, and Monitoring Contributor — roles frequently assigned to service principals, automation accounts, and developers. Any compromised identity with these roles can modify diagnostic settings across all resources in scope.
3. No separation of duty
The same principal that operates a resource can also modify its diagnostic settings. There is no built-in mechanism to enforce that monitoring configuration is managed by a separate security team.
4. Shallow monitoring defaults
Most organizations monitor whether diagnostic settings exist but not what categories are enabled. Azure Advisor checks for the presence of settings, not their content. Microsoft Sentinel's built-in analytics do not include rules that detect category-level changes.
5. API behavior quirks
Azure will silently auto-delete a diagnostic setting if all categories (logs and metrics) are set to disabled. This means a naive attempt to disable everything actually removes the setting, which is more detectable. The correct approach — disabling logs while enabling metrics — requires understanding this behavior.
| Technique | Detection Difficulty | Persistence | Scope |
|---|---|---|---|
| Disable Sentinel analytics rule | Low — rule change is logged and auditable | Until re-enabled | Per rule |
| Modify NSG to allow traffic | Medium — NSG flow logs may capture it | Until reverted | Per NSG |
| Disable diagnostic log categories | High — requires category-level audit | Until manually restored | Per resource |
| Clear Activity Log | Not possible — Activity Log is immutable | N/A | N/A |
| Delete Log Analytics data | Very High — requires workspace permissions | Permanent data loss | Per workspace |
Diagnostic setting manipulation occupies a unique position: it's highly persistent (survives reboots and service restarts), broadly scoped (affects all telemetry from a resource), and difficult to detect without purpose-built monitoring.
Approach: Disable all log categories, enable all metric categories, leave destinations unchanged.
This is the highest-stealth technique because:
- Destinations remain unchanged — the Azure Portal shows the original Log Analytics workspace or Storage Account as the target. A defender inspecting the setting sees the correct configuration.
- Metrics continue flowing — health dashboards, availability alerts, and operational monitoring remain fully functional. There is no operational impact that would trigger investigation.
- The setting still exists — tools that check "does this resource have diagnostic settings?" return true. Compliance checks pass.
- No data gaps in metrics — time-series data in Log Analytics or Azure Monitor continues uninterrupted, so there is no visible gap in monitoring charts.
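A minimal sketch of what this category flip looks like against the raw ARM REST API, using `Invoke-AzRestMethod` from the Az PowerShell module rather than the BlackCat function itself. The resource ID, setting name, and API version are illustrative placeholders, and the sketch assumes the existing setting already includes at least one metric category (the log-only case is handled by the category discovery described further below):

```powershell
# Illustrative target; any resource with an existing diagnostic setting works the same way.
$resourceId  = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault>"
$settingName = "default"   # placeholder name of the existing diagnostic setting
$path        = "{0}/providers/Microsoft.Insights/diagnosticSettings/{1}?api-version=2021-05-01-preview" -f $resourceId, $settingName

# Read the current setting so the destinations (workspaceId, storageAccountId, event hub) stay untouched.
$setting = (Invoke-AzRestMethod -Path $path -Method GET).Content | ConvertFrom-Json

# Flip every log category off and every metric category on; nothing else changes.
foreach ($log in $setting.properties.logs)       { $log.enabled    = $false }
foreach ($metric in $setting.properties.metrics) { $metric.enabled = $true }

# PUT the modified body back. At least one category (the metrics) remains enabled,
# so Azure keeps the setting instead of silently auto-deleting it.
Invoke-AzRestMethod -Path $path -Method PUT -Payload ($setting | ConvertTo-Json -Depth 10)
```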
The only way to detect this is to enumerate each diagnostic setting and inspect the enabled property of individual log categories. At scale (hundreds or thousands of resources), this requires automation that most organizations have not implemented.
Applicable to: Any resource type that supports both log and metric categories — which covers the vast majority of Azure services including Key Vaults, SQL Databases, App Services, Virtual Machines, Storage Accounts, and their sub-services (blob, queue, table, file). The Transaction metric is universally available across Azure Storage resources at all levels.
Important nuance: A diagnostic setting may exist with only log categories configured (the defender never enabled metrics in that setting). The resource still supports metrics — the setting just doesn't include them. The function handles this by querying the diagnosticSettingsCategories API to discover available metric categories and injecting them into the PUT request. This ensures category manipulation works even when the original setting was log-only.
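A sketch of that discovery step, again over the raw ARM API with `Invoke-AzRestMethod`; the variable names are illustrative and not taken from the function's source:

```powershell
# List the log and metric categories this resource type actually supports.
$catPath = "{0}/providers/Microsoft.Insights/diagnosticSettingsCategories?api-version=2021-05-01-preview" -f $resourceId

$categories = ((Invoke-AzRestMethod -Path $catPath -Method GET).Content | ConvertFrom-Json).value

# categoryType is either 'Logs' or 'Metrics'; keep the metric names (e.g. 'Transaction').
$metricCategories = $categories |
    Where-Object { $_.properties.categoryType -eq 'Metrics' } |
    Select-Object -ExpandProperty name

# Any discovered metric can then be injected into the PUT body as
# @{ category = $metricCategories[0]; enabled = $true } before the setting is written back.
```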
Approach: Remove original destinations (Log Analytics, Event Hub), replace with a storage account under attacker control or a low-visibility sink.
This technique is required when a resource genuinely has no metric categories available. Without at least one enabled category, Azure will auto-delete the diagnostic setting. Since these resources only have log categories, the logs must remain enabled — but the destination can be changed to a storage account where the data is effectively buried.
In practice, this fallback is rarely needed. Most Azure resources — including Storage Account sub-services (blob, queue, table, file) — support the Transaction metric. The function first attempts to discover available metric categories via the diagnosticSettingsCategories API and inject them into the setting. Destination redirection only activates when the API confirms no metric categories exist for the resource.
Applicable to: The small number of resource types that genuinely lack metric support, such as certain Network Security Group flow log configurations. For most resources, including all Storage Account sub-services, the primary category manipulation technique applies.
Sink selection considerations: A redirected destination must be a valid Azure Storage Account. Selecting the sink requires care:
- Self-redirection (pointing a storage account's sub-service logs back to the same account) is functionally a no-op — the logs would still exist.
- The sink should be a storage account the defender is unlikely to monitor.
- Auto-discovery can select the first non-target storage account in the subscription.
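A hedged sketch of the redirect itself, building a fresh PUT body so the original workspace destination is simply omitted; `$sinkStorageId` is a placeholder for whichever sink was selected:

```powershell
# Keep the existing (and still enabled) log categories, but replace the destination:
# the PUT body omits workspaceId entirely and points storageAccountId at the sink.
$setting = (Invoke-AzRestMethod -Path $path -Method GET).Content | ConvertFrom-Json

$payload = @{
    properties = @{
        logs             = $setting.properties.logs   # unchanged, still enabled
        storageAccountId = $sinkStorageId             # low-visibility storage account
    }
} | ConvertTo-Json -Depth 10

Invoke-AzRestMethod -Path $path -Method PUT -Payload $payload
```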
Approach: DELETE the diagnostic setting entirely.
This is the most impactful but least stealthy technique. The diagnostic setting disappears from the Azure Portal, creating an obvious gap that compliance scans and manual review will detect.
Use cases are limited to scenarios where stealth is not a priority, such as creating noise to distract incident response teams, or in environments without mature monitoring practices.
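For completeness, the removal variant is a single call, shown here with `Remove-AzDiagnosticSetting` from the Az.Monitor module (a raw ARM DELETE against the same path works equally well); `$resourceId` and `$settingName` are the same placeholders as above:

```powershell
# Deletes the setting outright: effective, but the gap is obvious to any compliance scan.
Remove-AzDiagnosticSetting -ResourceId $resourceId -Name $settingName
```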
Storage accounts present a unique challenge because Azure splits their diagnostic settings across multiple resource levels:
```
storageAccounts/{name}                         ← Account-level
storageAccounts/{name}/blobServices/default    ← Blob operations
storageAccounts/{name}/queueServices/default   ← Queue operations
storageAccounts/{name}/tableServices/default   ← Table operations
storageAccounts/{name}/fileServices/default    ← File operations
```
All five levels support both log categories (StorageRead, StorageWrite, StorageDelete) and the Transaction metric. However, a common defender configuration is to enable only log categories on sub-services without including metrics. This creates a situation where the existing diagnostic setting appears to lack metrics, even though the resource supports them.
Disable-DiagnosticSetting handles this in two ways:
- Automatic sub-resource expansion — Any storage account resource ID is expanded into all five targets (account + four sub-services), ensuring no logging level is missed.
- Metric category discovery — When an existing setting lacks metrics, the function queries the `diagnosticSettingsCategories` API on the resource. If the resource supports `Transaction` (which all storage sub-services do), the metric is injected into the PUT request. This enables the stealth category manipulation technique even on settings that were originally configured with logs only.
The result: all storage sub-services use the primary stealth approach (disable logs, enable Transaction metric, preserve destinations). The destination redirect fallback is not needed for storage resources.
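A small sketch of what the sub-resource expansion amounts to; the account resource ID is a placeholder:

```powershell
# One storage account resource ID becomes five diagnostic-setting targets:
# the account itself plus the blob, queue, table, and file services.
$storageId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<name>"

$targets = @($storageId)
foreach ($svc in 'blob', 'queue', 'table', 'file') {
    $targets += "$storageId/${svc}Services/default"
}
# Each entry in $targets then goes through the same category manipulation shown earlier.
```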
After category-level disablement:
- The diagnostic setting still appears in the list
- Destinations show the original Log Analytics workspace
- Clicking into the setting reveals individual category checkboxes — logs show as unchecked, metrics as checked
- This level of detail is rarely inspected during routine operations
Every diagnostic setting modification generates an Activity Log entry:
```
Operation: Microsoft.Insights/diagnosticSettings/write
Status:    Succeeded
Caller:    compromised-sp@tenant.onmicrosoft.com
```
The Activity Log records that the operation happened but does not record the payload (which categories were changed). An analyst sees "diagnostic setting was modified" but cannot determine from the Activity Log alone whether logs were disabled.
After impairment, the affected log tables simply stop receiving new records. There are no error entries, no "logging stopped" events, and no gap markers. The table appears to have data up to the impairment time and nothing after.
This is indistinguishable from a resource that was simply idle (no operations to log) unless the analyst correlates the Activity Log write event with the timing of data loss.
Azure Advisor checks for the presence of diagnostic settings, not their content. A resource with all log categories disabled but metrics enabled will show as "diagnostic settings configured" — passing compliance checks.
Microsoft Defender for Cloud does not currently include a recommendation that audits individual log category states within diagnostic settings.
Despite the difficulty, detection is possible with purpose-built monitoring:
```kusto
AzureActivity
| where OperationNameValue =~ "MICROSOFT.INSIGHTS/DIAGNOSTICSETTINGS/WRITE"
| where ActivityStatusValue == "Success"
| project TimeGenerated, Caller, CallerIpAddress, ResourceId, CorrelationId
| order by TimeGenerated desc
```

This query surfaces all diagnostic setting modifications. The challenge is filtering legitimate changes from malicious ones — in active environments, diagnostic settings change frequently during deployments and infrastructure updates.
```kusto
let resources = AzureDiagnostics
    | where TimeGenerated > ago(7d)
    | summarize LastSeen = max(TimeGenerated) by ResourceId;
resources
| where LastSeen < ago(1d)
| project ResourceId, LastSeen, GapHours = datetime_diff('hour', now(), LastSeen)
```

This looks for resources that were previously sending diagnostic data but have stopped. The gap detection window must account for legitimate quiet periods (weekends, low-traffic resources).
```json
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Insights/diagnosticSettings"
      },
      {
        "count": {
          "field": "Microsoft.Insights/diagnosticSettings/logs[*]",
          "where": {
            "field": "Microsoft.Insights/diagnosticSettings/logs[*].enabled",
            "equals": "false"
          }
        },
        "greater": 0
      }
    ]
  },
  "then": {
    "effect": "audit"
  }
}
```

This policy flags any diagnostic setting with at least one disabled log category. Switching to the `deny` effect prevents the modification entirely, though this may interfere with legitimate configuration changes.
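A hedged sketch of wiring this rule up with the Az.Resources cmdlets, assuming the JSON above has been saved to a local file; the definition, assignment, and file names are arbitrary:

```powershell
# Register the custom definition and assign it at subscription scope (audit effect).
$definition = New-AzPolicyDefinition -Name 'audit-disabled-log-categories' `
    -DisplayName 'Audit diagnostic settings with disabled log categories' `
    -Policy '.\disabled-log-categories.rules.json'

New-AzPolicyAssignment -Name 'audit-disabled-log-categories' `
    -Scope '/subscriptions/<sub-id>' `
    -PolicyDefinition $definition
```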
```powershell
# Periodic category audit: enumerate every resource, inspect each diagnostic
# setting, and report any setting with one or more disabled log categories.
$resources = Get-AzResource

foreach ($r in $resources) {
    # Not every resource type supports diagnostic settings; ignore those errors.
    $ds = Get-AzDiagnosticSetting -ResourceId $r.ResourceId -ErrorAction SilentlyContinue

    foreach ($s in $ds) {
        $disabled = $s.Logs | Where-Object { -not $_.Enabled }
        if ($disabled) {
            [PSCustomObject]@{
                Resource     = $r.Name
                Type         = $r.ResourceType
                Setting      = $s.Name
                DisabledLogs = ($disabled.Category -join ', ')
            }
        }
    }
}
```

| Control | What It Does | Limitation |
|---|---|---|
| Azure Policy (deny) | Prevents PUTs that disable log categories | May block legitimate changes; requires careful policy design |
| Resource Locks (ReadOnly) | Blocks all writes to the resource, including diagnostic settings | Requires Microsoft.Authorization/locks/delete to remove |
| PIM for Monitoring Contributor | Time-bounds access to diagnostic setting permissions | Requires PIM enrollment and JIT request |
| Conditional Access on ARM | Restricts who can call ARM APIs based on identity, location, device | Does not distinguish diagnostic setting writes from other ARM operations |
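Of the preventive options, the ReadOnly lock is the simplest to sketch; the example below assumes the Az.Resources module and a `$resourceId` pointing at the protected resource, with an arbitrary lock name:

```powershell
# A ReadOnly lock rejects every write to the resource, including
# Microsoft.Insights/diagnosticSettings/write, until the lock itself is removed.
New-AzResourceLock -LockName 'protect-monitoring-config' `
    -LockLevel ReadOnly `
    -Scope $resourceId `
    -LockNotes 'Diagnostic settings are managed by the security team' `
    -Force
```

Note that ReadOnly locks block all writes, not just monitoring changes, so they fit static resources better than frequently reconfigured ones.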
| Control | What It Detects | Response Time |
|---|---|---|
| Sentinel analytics rule on Activity Log | `diagnosticSettings/write` operations | Near real-time |
| Telemetry gap detection | Resources that stopped sending log data | Hours to days (depends on lookback window) |
| Periodic category audit | Disabled log categories across all resources | Scheduled (daily/weekly) |
| Multi-destination comparison | Discrepancies between Log Analytics and Storage data | Requires dual-destination setup |
Multi-destination logging: Configure diagnostic settings to send to both Log Analytics and a Storage Account. An attacker using category disablement affects both, but destination redirection would only affect one — the other destination would still receive data if logs remain enabled.
Subscription-level Activity Log export: The Activity Log is a separate telemetry stream that captures ARM control-plane operations. Even if resource-level diagnostic settings are impaired, the Activity Log records the modification itself. Export this to a separate, locked-down Log Analytics workspace.
Immutable Storage for log archival: Configure Storage Account immutability policies on diagnostic log containers. While this doesn't prevent the diagnostic setting from being modified, it ensures that logs collected before the impairment cannot be deleted.
Desired State Configuration through IaC: In mature environments, all diagnostic settings are declared in code (Bicep, Terraform) and deployed through a CI/CD pipeline, establishing a known-good baseline. Two detection layers — an NRT Sentinel rule on diagnosticSettings/write operations and a data anomaly detection on ingestion volume drops — can trigger a SOAR playbook that re-executes the IaC pipeline. Any manual modification is automatically overwritten with the declared configuration:
```
Attacker modifies diagnostic setting
  → NRT analytics rule fires within seconds
  → Automation triggers IaC pipeline
  → Pipeline redeploys declared diagnostic settings
  → Correct configuration is restored
```
This is uniquely effective because the attacker must compromise both the target resource and the remediation pipeline to achieve persistent impairment. The pipeline should use a managed identity (or service principal) with narrowly scoped Monitoring Contributor permissions, protected by Conditional Access for workload identities, and isolated in a separate management subscription. The key trade-off is the remediation window — typically seconds to minutes — where telemetry is not flowing. Combining this with multi-destination logging minimizes data loss within that gap.
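What the automation step might look like if the pipeline lives in Azure DevOps: a hypothetical runbook fragment that calls the pipeline Runs endpoint to re-run the IaC deployment. Organization, project, pipeline ID, and the token variable are all placeholders, and authentication details depend on the environment:

```powershell
# Trigger the IaC pipeline that redeploys the declared diagnostic settings.
# $adoToken is a placeholder for a PAT or Entra ID access token for Azure DevOps.
$org        = '<organization>'
$project    = '<project>'
$pipelineId = 42   # illustrative pipeline ID
$uri = "https://dev.azure.com/$org/$project/_apis/pipelines/$pipelineId/runs?api-version=7.1"

Invoke-RestMethod -Uri $uri -Method Post -ContentType 'application/json' `
    -Headers @{ Authorization = "Bearer $adoToken" } `
    -Body '{}'
```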
- Resource-level diagnostic logs (data-plane operations, audit events)
- Resource-level metrics (when using category disablement, metrics are explicitly kept enabled)
- All downstream consumers of these logs (Sentinel, custom alerts, workbooks, SOAR playbooks)
- Activity Log — ARM control-plane operations are logged independently and cannot be disabled by the resource owner. The impairment operation itself is recorded here.
- Entra ID logs — Sign-in and audit logs flow through a separate Entra ID diagnostic setting, not resource-level settings.
- Network-level captures — NSG flow logs, traffic analytics, and packet captures are independent.
- Microsoft Defender for Cloud alerts — Threat detection operates through platform-level telemetry that is not controlled by diagnostic settings.
This means the technique creates blind spots in resource-level forensics while leaving control-plane and identity-layer visibility intact. A complete defense evasion operation would need to address these other telemetry sources separately.
| Permission | Included In | Purpose |
|---|---|---|
| `Microsoft.Insights/diagnosticSettings/read` | Reader, Contributor, Owner, Monitoring Reader | Enumerate existing settings |
| `Microsoft.Insights/diagnosticSettings/write` | Contributor, Owner, Monitoring Contributor | Modify or create settings |
| `Microsoft.Insights/diagnosticSettings/delete` | Contributor, Owner, Monitoring Contributor | Remove settings |
The Contributor role — one of the most commonly assigned roles in Azure — includes all three permissions. No elevated or privileged role is required.
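A starting point for reviewing that exposure, assuming the Az.Resources module and a subscription-scope check of just the three built-in roles named above (custom roles and wildcard actions would need a deeper pass over role definitions):

```powershell
# Who can rewrite diagnostic settings in this subscription?
$scope = '/subscriptions/<sub-id>'
$roles = 'Owner', 'Contributor', 'Monitoring Contributor'

$roles | ForEach-Object {
    Get-AzRoleAssignment -RoleDefinitionName $_ -Scope $scope
} | Select-Object DisplayName, ObjectType, RoleDefinitionName, Scope
```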