Performing .NET diagnostics such as memory dumps and process tracing can be challenging in some scenarios, for example when running distroless containers that do not have an interactive shell. Fortunately, .NET provides several options for performing remote diagnostics, which differ somewhat in their capabilities.
Tool | Description | Supports remote diagnostics |
---|---|---|
diagnostic tools | Command line utilities that connect to your .NET process to perform diagnostics. | Limited support. dotnet-counters and dotnet-trace support remote diagnostics with a shared diagnostic port socket (for example a shared /tmp for the default port). Other utilities such as dotnet-dump currently require a shared process namespace. There is a PR to add such support to dotnet-dump and a feature request to expand support to the other tools. |
dotnet monitor | Hosts an HTTP API exposing diagnostic functionality targeting your .NET process. | Supported with a shared diagnostic port socket (for example a shared /tmp for the default port). Requires a fair amount of configuration. Connecting to an existing .NET process currently requires knowing and configuring the default diagnostic port socket. There is a feature request to support process discovery. Advanced in-process features such as call stacks and exception monitoring require listen mode and target application configuration, since these features load additional libraries into the target application. |
diagnostics client library | The library on which the diagnostic tools and dotnet monitor are based, which can also be used to develop your own diagnostic tooling. | Supported in the latest release with a shared diagnostic port socket (for example a shared /tmp for the default port). Version 0.2.607501 added support for creating a DiagnosticsClient by connecting to a diagnostic port, which removes the shared process namespace requirement (a related bugfix which somewhat changes the call API has not yet been released). This means that we can remotely connect to a default diagnostic port without a shared process namespace (technically this was previously possible, but required complex implementation and use of reflection to invoke internal members). |
The .NET diagnostic port is the core feature that enables remote process inspection and diagnostics. The diagnostic port implementation is platform specific and on Linux is implemented through Unix Domain Sockets. Remote diagnostics, such as across containers, is therefore only possible when an appropriate storage volume supporting sockets can be shared.
Remote filesystems such as mounted Azure Storage file shares do not provide the necessary socket support. In Azure Container Apps, replica-scoped storage (equivalent to Kubernetes emptyDir) provides shared socket support.
Sharing the /tmp directory between the container from which you are performing diagnostics and the application you wish to diagnose allows diagnostics without additional configuration of the target application. This works by utilizing the default diagnostic port created under the temp directory. The default diagnostic port is also the only port on which the runtime listens in this scenario, since for additionally configured ports "listen is only supported by the Mono runtime on Android or iOS".
Technically dotnet-dsrouter might also be usable to perform remote diagnostics without access to the diagnostic port socket, as long as the router component is hosted on a target with access to the diagnostic socket. Officially however the router tool is restricted to emulated applications on Android, iOS, and tvOS platforms.
The below examples are run against a sample Azure Container App deployed from dotnet-remote-diagnostics.bicep. The sample includes three containers and a storage account:
- An ASP.NET Core chiseled image as the target of diagnostic tests. Since it is a chiseled image, we cannot interactively access the container, and it lacks the ability to run .NET diagnostic tooling.
- A dotnet-monitor image which allows us to execute diagnostic operations against the above container. It hosts a diagnostic port listener on a shared socket address to which the application connects. This image also lacks a shell.
- An azurelinux image to allow shell access.
- A storage account authorized to the identity of the container app, allowing dotnet-monitor to egress diagnostic artifacts as blobs to this account. It also includes a file share to allow accessing artifacts egressed using tooling.
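To verify what was actually deployed, you can list the containers in the app template with the Azure CLI (a quick optional check; the JMESPath query below assumes the standard container app resource shape):
# list the containers defined in the container app template
az containerapp show --resource-group <rg> --name <cap> --query "properties.template.containers[].name" -o tsv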
The example app includes three volumes:
- tmp-volume mounted as /tmp on the application and azurelinux containers, to allow us to connect to the default diagnostic port from the shell using diagnostic tooling and the diagnostics client library (also used as a share to access e.g. memory dumps created in the context of the application).
- dotnet-monitor-volume mounted as /dotnet-monitor on the application and dotnet-monitor images to allow the application to connect to the dotnet-monitor listener socket.
- diagnostics-volume mounted as /dotnet-diagnostics on azurelinux to allow the shell to easily egress diagnostic artifacts to a file share.
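From the azurelinux shell you can quickly sanity-check the shared mounts (only tmp-volume and diagnostics-volume are mounted in that container); this is an optional verification step, not required for the diagnostics below.
# show the filesystem type and size of the shared volume mounts visible to the shell container
df -hT /tmp /dotnet-diagnostics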
Currently only dotnet-trace and dotnet-counters support the --diagnostic-port parameter. In order to perform remote diagnostics, make sure you mount the default diagnostic port directory /tmp as a shared volume between your application and the container from which you are performing diagnostics. First you need to determine the default diagnostic port socket your application is listening on.
If you are using the example deployment, connect to the Azure Linux management container console using the Azure portal or az containerapp exec.
az containerapp exec --resource-group <rg> --name <cap> --container azurelinux --command /bin/bash
Note
Azure Container Apps also has a preview debug console accessible using az containerapp debug, which creates a sidecar container sharing the process namespace but cannot currently be used for diagnostic tooling. Running diagnostic tooling from this debug sidecar might eventually be possible.
In the below example we find one diagnostic port socket for an application with PID 1 (typical for containers), with disambiguation key 23139 (the process start time in jiffies since boot):
> find /tmp -maxdepth 1 -type s -name "dotnet-diagnostic-*-socket"
/tmp/dotnet-diagnostic-1-23139-socket
Then you can do the following to collect performance counters from the first .NET process found. Note the ,connect at the end of the --diagnostic-port parameter, indicating that we want to connect to the existing default diagnostic port that the application's .NET runtime is listening on.
dotnet-counters collect --diagnostic-port "$(find /tmp -maxdepth 1 -type s -name "dotnet-diagnostic-*-socket" -print -quit),connect" -o /dotnet-diagnostics/dotnet-counters.csv --counters System.Runtime,Microsoft.AspNetCore.Hosting,Microsoft-AspNetCore-Server-Kestrel,System.Net.Sockets --duration 00:01:00
Example output into dotnet-counters.csv:
Timestamp,Provider,Counter Name,Counter Type,Mean/Increment
02/17/2025 14:32:56,System.Runtime,CPU Usage (%),Metric,0.6086926437786285
02/17/2025 14:32:56,System.Runtime,Working Set (MB),Metric,106.7008
02/17/2025 14:32:56,System.Runtime,GC Heap Size (MB),Metric,4.876664
The same approach works for collecting traces, including converting the CPU sampling trace to Speedscope format:
dotnet-trace collect --diagnostic-port "$(find /tmp -maxdepth 1 -type s -name "dotnet-diagnostic-*-socket" -print -quit),connect" -o /dotnet-diagnostics/dotnet-trace.nettrace --profile cpu-sampling --format Speedscope --duration 00:01:00
You can view the nettrace file using PerfView, or the speedscope.json file using the Speedscope online app.
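If you only collected a .nettrace file, you can also convert it to Speedscope afterwards using dotnet-trace convert rather than passing --format at collection time; the path below assumes the trace collected above.
# convert an existing nettrace file to speedscope.json for the Speedscope web viewer
dotnet-trace convert /dotnet-diagnostics/dotnet-trace.nettrace --format Speedscope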
The diagnostics client library version 0.2.607501 added support for creating a DiagnosticsClient by connecting to a diagnostic port, which removes the shared process namespace requirement. This means that we can remotely connect to a default diagnostic port without a shared process namespace (technically this was previously possible, but required complex implementation and use of reflection to invoke internal members).
First we start a PowerShell session, while also working around the problematic Azure Container Apps default TERM variable configuration.
TERM=xterm-256color pwsh
Then we can:
- Install the diagnostic client library locally
- Load it into the PowerShell session
- Find the correct remote diagnostic port socket
- Create a DiagnosticsClient connected to the port
- Take a memory dump of the process
- Copy the memory dump from the shared /tmp volume into the Azure file share /dotnet-diagnostics volume
Note that some operations, such as taking memory dumps, are performed by the target .NET runtime, so the destination file is written in the context of the target process filesystem. That is why the last step copies the dump from the shared tmp volume to the Azure file share.
# install diagnostics sdk
Install-Package -ProviderName NuGet -Name Microsoft.Diagnostics.NETCore.Client -RequiredVersion 0.2.607501 -ExcludeVersion -Destination . -Force -SkipDependencies
# load the diagnostics client library
Add-Type -Path "./Microsoft.Diagnostics.NETCore.Client/lib/netstandard2.0/Microsoft.Diagnostics.NETCore.Client.dll"
# list the default diagnostic sockets in the shared tmp volume
$socketfilename = Get-ChildItem -File -Path "/tmp/dotnet-diagnostic-*-*-socket" | Where-Object {$_.UnixMode -like 's*'} | Select-Object -First 1 -Expand FullName
# connect to diagnostic socket
$client = [Microsoft.Diagnostics.NETCore.Client.DiagnosticsClient]::FromDiagnosticPort("$socketfilename,connect", [System.Threading.CancellationToken]::None).Result
# if using the newer https://github.com/dotnet/diagnostics/pull/5073
# $client = [Microsoft.Diagnostics.NETCore.Client.DiagnosticsClientConnector]::FromDiagnosticPort("$socketfilename,connect", [System.Threading.CancellationToken]::None).Result
# write a full memory dump to the shared tmp volume
$client.WriteDump([Microsoft.Diagnostics.NETCore.Client.DumpType]::Full, "/tmp/core.dmp", $true)
# copy it to azure fileshare
Copy-Item "/tmp/core.dmp" "/dotnet-diagnostics/core.dmp"
The dotnet monitor tool makes it easier to access diagnostic information from multiple remote .NET processes, and makes the diagnostic capabilities available outside the context of the Unix Domain Socket over an HTTP API. dotnet monitor does this by working in either listen or connect mode.
- Listen mode: allows monitoring multiple processes, but requires the target processes to be configured to connect to the monitor by configuring an additional diagnostic port using DOTNET_DiagnosticPorts.
- Connect mode: allows monitoring a predefined set of .NET processes. Allows connecting to the default diagnostic port if the /tmp volume is shared.
Note that when configuring the target process diagnostic port in connect mode, consider whether to use suspend or nosuspend, depending on how important it is to access telemetry from the process startup phase.
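For reference, this is roughly how the diagnostic port of the target application container is configured in the sample (the value matches the DOTNET_DiagnosticPorts entry visible in the /env output below; the commented nosuspend variant is an optional alternative):
# connect the application runtime to the dotnet-monitor listener socket on the shared dotnet-monitor-volume
DOTNET_DiagnosticPorts=/dotnet-monitor/dotnet-monitor.sock
# append nosuspend if the runtime should not wait for the monitor during startup
# DOTNET_DiagnosticPorts=/dotnet-monitor/dotnet-monitor.sock,nosuspend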
To perform diagnostics we need to be able to trigger the creation of diagnostic artifacts to the storage account. This is done using the dotnet monitor REST API. Since the dotnet monitor endpoint at http://localhost:52323 is not exposed through Azure Container Apps ingress, we need to invoke it from within the app environment itself.
Now we can use curl to get information about the dotnet monitor service:
> curl -fsS http://localhost:52323/info | jq
{
"version": "8.0.8-servicing.25104.1+9b26e1b3a55f73616839ee051542651d387c4a19",
"runtimeVersion": "8.0.13",
"diagnosticPortMode": "Listen",
"diagnosticPortName": "/dotnet-monitor/dotnet-monitor.sock"
}
And list the processes that can be targeted in other operations:
> curl -fsS http://localhost:52323/processes | jq
[
{
"pid": 1,
"uid": "833317f4-a06e-487b-ae9b-2c32a799bd82",
"name": "aspnetapp",
"isDefault": true
}
]
You can target a specific process by pid or uid. In our example you can also omit the process specifier completely, since only one .NET process is attached to the monitor listener, making it the default process. For a particular process you can get detailed process information:
> curl -fsS http://localhost:52323/process | jq
{
"pid": 1,
"uid": "833317f4-a06e-487b-ae9b-2c32a799bd82",
"name": "aspnetapp",
"commandLine": "/app/aspnetapp",
"operatingSystem": "Linux",
"processArchitecture": "x64"
}
As well as the process environment variables:
> curl -fsS http://localhost:52323/env | jq
{
"CONTAINER_APP_NAME": "cap-containerapp",
"PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"DOTNET_DiagnosticPorts": "/dotnet-monitor/dotnet-monitor.sock",
"KUBERNETES_PORT_443_TCP_PORT": "443",
"KUBERNETES_PORT_443_TCP_ADDR": "100.100.224.1",
"ASPNETCORE_HTTP_PORTS": "8080",
"...": "..."
}
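These endpoints all accept the same process selectors, so if more than one process were attached you could target one explicitly by pid or uid instead of relying on the default process:
> curl -fsS "http://localhost:52323/process?pid=1" | jq
> curl -fsS "http://localhost:52323/process?uid=833317f4-a06e-487b-ae9b-2c32a799bd82" | jq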
Memory dumps can be taken using /dump.
> curl http://localhost:52323/dump?egressProvider=monitorBlob
A .NET memory dump is a snapshot of the entire process’s memory at a point in time. It includes not only the managed objects (allocated on the garbage‐collected heap) but also native memory, thread stacks, the runtime itself, and any other data residing in the process. This type of dump is generally used for comprehensive debugging, such as when investigating crashes or complex issues that might involve both managed and unmanaged code.
This will save a memory dump with the core_ prefix to blob storage. You can analyze this memory dump, for example, using WinDbg; you may need to install the .NET debugger extensions to enable debugging. Check out more detailed instructions on analyzing Linux dumps.
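To analyze the dump locally, first download the egressed blob, for example with the Azure CLI; the blob container name depends on how the monitorBlob egress provider is configured, so the names below are placeholders.
# list the egressed artifacts (the blob container name comes from your egress provider configuration)
az storage blob list --account-name <storage> --container-name <egress-container> --auth-mode login -o table
# download a specific dump blob for local analysis, for example in WinDbg
az storage blob download --account-name <storage> --container-name <egress-container> --name <core_blob_name> --file core.dmp --auth-mode login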
GC (Garbage Collector) dumps can be taken using /gcdump.
> curl http://localhost:52323/gcdump?egressProvider=monitorBlob
A GC dump is a snapshot of the managed heap i.e. the set of objects that the .NET garbage collector is managing. This dump focuses on the state of managed objects, their generations, and relationships, which is useful when diagnosing memory leaks or issues related solely to the managed part of an application.
This will save a .gcdump blob which you can analyze, for example, using WinDbg or PerfView.
Diagnostic traces can be taken using /trace. Take a trace with the default trace profile, including CPU usage, ASP.NET requests and HttpClient requests, as well as event counters, for 30 seconds:
> curl http://localhost:52323/trace?egressProvider=monitorBlob
This will save a .nettrace blob which you can analyze, for example, using PerfView.
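The profile and duration can be adjusted with query parameters; for example, a CPU-sampling-only trace for 60 seconds (a hedged example; parameter names per the dotnet monitor trace API, so verify against your version):
> curl "http://localhost:52323/trace?profile=Cpu&durationSeconds=60&egressProvider=monitorBlob"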
Metrics can be monitored using /livemetrics, which returns metrics in JSON format. Running curl http://localhost:52323/livemetrics?egressProvider=monitorBlob will emit a .metrics.json blob containing, for example:
{"timestamp":"2025-02-11T10:52:45.9427509+00:00","provider":"System.Runtime","name":"cpu-usage","displayName":"CPU Usage","unit":"%","counterType":"Metric","tags":"","value":0.008740756370863644}
{"timestamp":"2025-02-11T10:52:45.9428311+00:00","provider":"System.Runtime","name":"working-set","displayName":"Working Set","unit":"MB","counterType":"Metric","tags":"","value":97.042432}
There is also a /metrics API for metrics in the Prometheus text-based exposition format. This endpoint differs from livemetrics in that it does not require using an egress provider and returns the metrics as the HTTP response:
> curl http://localhost:52323/metrics
# HELP systemruntime_cpu_usage_ratio CPU Usage
# TYPE systemruntime_cpu_usage_ratio gauge
systemruntime_cpu_usage_ratio 0.0042445116638396615 1739271200943
systemruntime_cpu_usage_ratio 0.004385383815808173 1739271205942
systemruntime_cpu_usage_ratio 0.004884701520313601 1739271210942
# HELP systemruntime_working_set_bytes Working Set
# TYPE systemruntime_working_set_bytes gauge
systemruntime_working_set_bytes 99098624 1739271195942
systemruntime_working_set_bytes 99086336 1739271200943
systemruntime_working_set_bytes 99074048 1739271205942
Note
Using these features requires dotnet monitor in-process features to be enabled. Because these libraries are loaded into the target application (they are not loaded into dotnet monitor), they may have a performance impact on memory and CPU utilization in the target application.
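As a sketch, the in-process features can be enabled through dotnet monitor configuration on the dotnet-monitor container, for example via environment variables; the key names below follow the dotnet monitor configuration documentation, so verify them against the version you run.
# enable all dotnet monitor in-process features (call stacks, exceptions, parameter capturing)
DotnetMonitor_InProcessFeatures__Enabled=true
# or enable an individual feature, for example exception history only
# DotnetMonitor_InProcessFeatures__Exceptions__Enabled=true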
You can capture call stacks of the target process using /stacks:
> curl http://localhost:52323/stacks?egressProvider=monitorBlob
For example showing the ASP.NET Core main thread:
Thread: (0x1)
System.Private.CoreLib.dll!System.Threading.Monitor.Wait
System.Private.CoreLib.dll!System.Threading.ManualResetEventSlim.Wait
System.Private.CoreLib.dll!System.Threading.Tasks.Task.SpinThenBlockingWait
System.Private.CoreLib.dll!System.Threading.Tasks.Task.InternalWaitCore
System.Private.CoreLib.dll!System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification
Microsoft.Extensions.Hosting.Abstractions.dll!Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run
aspnetapp.dll!Program.<Main>$
[NativeFrame]
You can list first-chance exceptions thrown by your application:
> curl http://localhost:52323/exceptions?egressProvider=monitorBlob
You can get ILogger logs emitted by your application. The LoggingEventSource provider must be registered in order to capture logs (default host builders automatically register this).
> curl http://localhost:52323/logs?egressProvider=monitorBlob
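The log capture can be narrowed with query parameters such as the minimum log level and duration (a hedged example; parameter names per the dotnet monitor logs API):
> curl "http://localhost:52323/logs?level=Warning&durationSeconds=60&egressProvider=monitorBlob"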
And finally you can capture method parameters for the invocations of your desired method using /parameters:
> curl -X POST http://localhost:52323/parameters?egressProvider=monitorBlob -H "Content-Type: application/json" -d '{"methods":[{"moduleName":"aspnetapp.dll","typeName":"aspnetapp.Pages.Pages_Privacy","methodName":"ExecuteAsync"}],"captureLimit":3}'