Diagnosing .NET applications remotely

Performing .NET diagnostics such as memory dumps and process tracing can be challenging in some scenarios, for example when running distroless containers which do not have an interactive shell. Fortunately .NET provides several options for performing remote diagnostics. The options differ somewhat in their capabilities.

  • diagnostic tools: Command line utilities that connect to your .NET process to perform diagnostics. Remote support: limited. dotnet-counters and dotnet-trace support remote diagnostics over a shared diagnostic port socket (for example a shared /tmp for the default port). Other utilities such as dotnet-dump currently require a shared process namespace. There is a PR to add such support to dotnet-dump and a feature request to expand support to other tools.
  • dotnet monitor: Hosts an HTTP API exposing diagnostic functionality targeting your .NET process. Remote support: supported over a shared diagnostic port socket (for example a shared /tmp for the default port), but requires a fair amount of configuration. Connecting to an existing .NET process currently requires knowing and configuring the default diagnostic port socket; there is a feature request to support process discovery. Advanced in-process features such as call stacks and exception monitoring require listen mode and target application configuration, since these features load additional libraries into the target application.
  • diagnostics client library: The library on which the diagnostic tools and dotnet monitor are built, and which can also be used to develop your own diagnostic tooling. Remote support: supported in the latest release over a shared diagnostic port socket (for example a shared /tmp for the default port). Version 0.2.607501 added support for creating a DiagnosticsClient by connecting to a diagnostic port, which removes the shared process namespace requirement (a related bugfix that somewhat changes the call API has not yet been released). This means that we can remotely connect to a default diagnostic port without a shared process namespace (technically this was previously possible, but required complex implementation and the use of reflection to invoke internal members).

The .NET diagnostic port is the core feature that enables remote process inspection and diagnostics. The diagnostic port implementation is platform specific and on Linux is implemented through Unix Domain Sockets. Remote diagnostics, such as across containers, is therefore only possible when an appropriate storage volume that supports sockets can be shared.

Remote filesystems such as mounted Azure Storage file shares do not provide the necessary socket support. In Azure Container Apps, replica-scoped storage (equivalent to Kubernetes emptyDir) provides shared socket support.

Sharing the /tmp directory between the container from which you are performing diagnostics and the application you wish to diagnose allows diagnostics without additional configuration of the target application. This works by using the default diagnostic port socket created under the temp directory. The default diagnostic port is also the only port on which the runtime itself listens; for additionally configured diagnostic ports, "listen is only supported by the Mono runtime on Android or iOS".

Technically dotnet-dsrouter might also be usable to perform remote diagnostics without access to the diagnostic port socket, as long as the router component is hosted on a target with access to the diagnostic socket. Officially however the router tool is restricted to emulated applications on Android, iOS, and tvOS platforms.

Example

The below examples are run against a sample Azure Container App deployed from dotnet-remote-diagnostics.bicep (included at the end of this gist). The sample includes three containers and a storage account:

  • An ASP.NET Core chiseled image as the target of the diagnostic tests. Since it is a chiseled image, we cannot access the container interactively and it lacks the capability to run .NET diagnostic tooling.
  • A dotnet-monitor image which allows us to execute diagnostic operations against the above container. It hosts a diagnostic port listener on a shared socket address to which the application connects. This image also lacks a shell.
  • An azurelinux image to allow shell access.
  • A storage account authorized to the identity of the container app, allowing dotnet-monitor to egress diagnostic artifacts as blobs to this account. Also includes a file share to allow accessing artifacts egressed using tooling.

The example app includes three volumes:

  • tmp-volume mounted as /tmp on the application and azurelinux containers, to allow us to connect to the default diagnostic port from the shell using diagnostic tooling and the diagnostic client library (also used as a share to access e.g. memory dumps created in the context of the application).
  • dotnet-monitor-volume mounted as /dotnet-monitor on the application and dotnet-monitor images to allow the application to connect to the dotnet-monitor listener socket.
  • diagnostics-volume mounted as /dotnet-diagnostics on azurelinux to allow the shell to easily egress diagnostic artifacts to a file share.

Using diagnostic tools

Currently only dotnet-trace and dotnet-counters support the --diagnostic-port parameter. In order to perform remote diagnostics, make sure you mount the default diagnostic port directory /tmp as a shared volume between your application and the container from which you are performing diagnostics. First you need to determine the default diagnostic port socket your application is listening on.

If you are using the example deployment, connect to the Azure Linux management container console using the Azure portal or az containerapp exec.

az containerapp exec --resource-group <rg> --name <cap> --container azurelinux --command /bin/bash

Note

Azure Container Apps also has a preview debug console, accessible using az containerapp debug, that creates a sidecar container sharing the process namespace, but it cannot currently be used for diagnostic tooling. Running diagnostic tooling from this debug sidecar might eventually be possible.

In the example below we find one diagnostic port socket for the application with pid 1 (typical for containers) and disambiguation key 23139 (the process start time in "jiffies" since boot):

> find /tmp -maxdepth 1 -type s -name "dotnet-diagnostic-*-socket"

/tmp/dotnet-diagnostic-1-23139-socket
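
If you need the target pid rather than the socket path (for example for other tooling), it can be extracted from the socket file name; a minimal sketch assuming the default dotnet-diagnostic-<pid>-<key>-socket naming:

# locate the default diagnostic port socket and extract the pid,
# which is the first numeric segment after the dotnet-diagnostic prefix
socket="$(find /tmp -maxdepth 1 -type s -name "dotnet-diagnostic-*-socket" -print -quit)"
pid="$(basename "$socket" | sed -E 's/^dotnet-diagnostic-([0-9]+)-.*$/\1/')"
echo "target pid: $pid"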

Then you can collect performance counters from the first .NET process found as follows. Note the ,connect suffix at the end of the --diagnostic-port parameter, indicating that we want to connect to the existing default diagnostic port the application's .NET runtime is listening on.

dotnet-counters collect --diagnostic-port "$(find /tmp -maxdepth 1 -type s -name "dotnet-diagnostic-*-socket" -print -quit),connect" -o /dotnet-diagnostics/dotnet-counters.csv --counters System.Runtime,Microsoft.AspNetCore.Hosting,Microsoft-AspNetCore-Server-Kestrel,System.Net.Sockets --duration 00:01:00

Example output into dotnet-counters.csv:

Timestamp,Provider,Counter Name,Counter Type,Mean/Increment
02/17/2025 14:32:56,System.Runtime,CPU Usage (%),Metric,0.6086926437786285
02/17/2025 14:32:56,System.Runtime,Working Set (MB),Metric,106.7008
02/17/2025 14:32:56,System.Runtime,GC Heap Size (MB),Metric,4.876664

The same approach works for collecting traces, here including conversion of the CPU sampling trace to Speedscope format:

dotnet-trace collect --diagnostic-port "$(find /tmp -maxdepth 1 -type s -name "dotnet-diagnostic-*-socket" -print -quit),connect" -o /dotnet-diagnostics/dotnet-trace.nettrace --profile cpu-sampling --format Speedscope --duration 00:01:00

You can view the .nettrace file using PerfView:

[image: the collected trace opened in PerfView]

Or view the speedscope.json file using the online app:

[image: the CPU sampling trace opened in speedscope]

Using the diagnostics client library

The diagnostics client library version 0.2.607501 added support for creating a DiagnosticsClient by connecting to a diagnostic port, which removes the shared process namespace requirement. This means that we can remotely connect to a default diagnostic port without a shared process namespace (technically this was previously possible, but required complex implementation and the use of reflection to invoke internal members).

First we want to start a PowerShell session, while also fixing a bug in the Azure Container App default TERM variable configuration.

TERM=xterm-256color pwsh

Then we can:

  • Install the diagnostic client library locally
  • Load it into the PowerShell session
  • Find the correct remote diagnostic port socket
  • Create a DiagnosticClient connected to the port
  • Take a memory dump of the process
  • Copy the memory dump from the shared /tmp volume into the Azure file share /dotnet-diagnostics volume

Note that some operations, such as taking memory dumps, are performed by the target .NET runtime, so the destination file path is interpreted in the context of the target process's filesystem. The last step therefore copies the dump from the shared /tmp volume to the Azure file share.

# install diagnostics sdk
Install-Package -ProviderName NuGet -Name Microsoft.Diagnostics.NETCore.Client -RequiredVersion 0.2.607501 -ExcludeVersion -Destination . -Force -SkipDependencies

# load the diagnostics client library
Add-Type -Path "./Microsoft.Diagnostics.NETCore.Client/lib/netstandard2.0/Microsoft.Diagnostics.NETCore.Client.dll"

# find the default diagnostic port socket in the shared tmp volume
$socketfilename = Get-ChildItem -File -Path "/tmp/dotnet-diagnostic-*-*-socket" | Where-Object {$_.UnixMode -like 's*'} | Select-Object -First 1 -Expand FullName

# connect to diagnostic socket
$client = [Microsoft.Diagnostics.NETCore.Client.DiagnosticsClient]::FromDiagnosticPort("$socketfilename,connect", [threading.cancellationtoken]::none).result
# if using newer https://github.com/dotnet/diagnostics/pull/5073
# $client = [Microsoft.Diagnostics.NETCore.Client.DiagnosticsClientConnector]::FromDiagnosticPort("$socketfilename,connect", [threading.cancellationtoken]::none).result

# write a full memory dump to shared tmp
$client.writedump([Microsoft.Diagnostics.NETCore.Client.DumpType]::Full, "/tmp/core.dmp", $true)

# copy it to azure fileshare
Copy-Item "/tmp/core.dmp" "/dotnet-diagnostics/core.dmp"

Using dotnet monitor

The dotnet monitor tool makes it easier to access diagnostic information from multiple remote .NET processes, and it exposes the diagnostic capabilities outside the context of the Unix Domain Socket over an HTTP API. It does this by working in either listen or connect mode.

  • Listen mode: allows monitoring multiple processes, but requires the target processes to be configured to connect to the monitor by configuring an additional diagnostic port using DOTNET_DiagnosticPorts.
  • Connect mode: allows monitoring a predefined set of .NET processes. Allows connecting to the default diagnostic port if the /tmp volume is shared.

Note that when configuring the target process's diagnostic port (as in the sketch below), consider whether to use suspend or nosuspend, depending on how important it is to capture telemetry from the process startup phase.
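
For example, a minimal sketch of the target application configuration matching the listen-mode setup in the Bicep template below (the socket path comes from that template; nosuspend is assumed here so the application does not wait for dotnet monitor at startup):

# target application environment: connect to the dotnet monitor listener socket
# the default suffix is suspend, which pauses startup until a diagnostic tool acknowledges the connection
DOTNET_DiagnosticPorts=/dotnet-monitor/dotnet-monitor.sock,nosuspend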

To perform diagnostics we need to be able to trigger the creation of diagnostic artifacts in the storage account. This is done using the dotnet monitor REST API. Since the dotnet monitor endpoint at http://localhost:52323 is not exposed through Azure Container Apps ingress, we need to invoke it from within the app environment itself.

Now we can use curl to get information about the dotnet monitor service:

> curl -fsS http://localhost:52323/info | jq
{
  "version": "8.0.8-servicing.25104.1+9b26e1b3a55f73616839ee051542651d387c4a19",
  "runtimeVersion": "8.0.13",
  "diagnosticPortMode": "Listen",
  "diagnosticPortName": "/dotnet-monitor/dotnet-monitor.sock"
}

And list the processes that can be targeted in other operations:

> curl -fsS http://localhost:52323/processes | jq
[
  {
    "pid": 1,
    "uid": "833317f4-a06e-487b-ae9b-2c32a799bd82",
    "name": "aspnetapp",
    "isDefault": true
  }
]

You can target a specific process by pid or uid. In our example you can also omit the process specifier completely, since only one .NET process is attached to the monitor listener, making it the default process. For a particular process you can get detailed process information:

> curl -fsS http://localhost:52323/process | jq
{
  "pid": 1,
  "uid": "833317f4-a06e-487b-ae9b-2c32a799bd82",
  "name": "aspnetapp",
  "commandLine": "/app/aspnetapp",
  "operatingSystem": "Linux",
  "processArchitecture": "x64"
}

As well as the process environment variables:

> curl -fsS http://localhost:52323/env | jq
{
  "CONTAINER_APP_NAME": "cap-containerapp",
  "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
  "DOTNET_DiagnosticPorts": "/dotnet-monitor/dotnet-monitor.sock",
  "KUBERNETES_PORT_443_TCP_PORT": "443",
  "KUBERNETES_PORT_443_TCP_ADDR": "100.100.224.1",
  "ASPNETCORE_HTTP_PORTS": "8080",
  "...": "..."
}
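
The process-targeted endpoints accept pid, uid, and name query parameters to address a specific process explicitly, for example (using the pid 1 shown above):

> curl -fsS "http://localhost:52323/process?pid=1" | jq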

Memory dump

Memory dumps can be taken using /dump.

> curl http://localhost:52323/dump?egressProvider=monitorBlob

A .NET memory dump is a snapshot of the entire process’s memory at a point in time. It includes not only the managed objects (allocated on the garbage‐collected heap) but also native memory, thread stacks, the runtime itself, and any other data residing in the process. This type of dump is generally used for comprehensive debugging, such as when investigating crashes or complex issues that might involve both managed and unmanaged code.

This will save a memory dump with a core_ prefix to blob storage. You can analyze this memory dump for example using WinDbg; you may need to install the .NET debugger extensions to enable managed debugging. Check out the more detailed instructions on analyzing Linux dumps. For example, in WinDbg:

[image: the memory dump loaded in WinDbg]
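
The /dump endpoint also accepts query parameters; a hedged example requesting a dump that includes the GC heap for a specific process (parameter names as in the dotnet monitor API documentation):

> curl "http://localhost:52323/dump?type=WithHeap&pid=1&egressProvider=monitorBlob"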

GC dump

GC (Garbage Collector) dumps can be taken using /gcdump.

> curl http://localhost:52323/gcdump?egressProvider=monitorBlob

A GC dump is a snapshot of the managed heap i.e. the set of objects that the .NET garbage collector is managing. This dump focuses on the state of managed objects, their generations, and relationships, which is useful when diagnosing memory leaks or issues related solely to the managed part of an application.

This will save a .gcdump blob, which you can analyze for example using Visual Studio or PerfView. For example, in PerfView:

[image: the GC dump heap snapshot opened in PerfView]

Trace

Diagnostic traces can be taken using /trace. The default trace profile includes CPU usage, ASP.NET Core requests, HttpClient requests, and event counters, collected for 30 seconds:

> curl http://localhost:52323/trace?egressProvider=monitorBlob

This will save a .nettrace blob which you can analyze for example using PerfView.
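
The /trace endpoint also accepts query parameters to select the profiles and duration explicitly; a hedged example (profile names and parameters as in the dotnet monitor API documentation):

> curl "http://localhost:52323/trace?profile=Cpu,Http,Metrics&durationSeconds=60&egressProvider=monitorBlob"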

Metrics

Metrics can be monitored using /livemetrics, which produces metrics in JSON format. Running curl http://localhost:52323/livemetrics?egressProvider=monitorBlob will egress a .metrics.json blob containing, for example:

{"timestamp":"2025-02-11T10:52:45.9427509+00:00","provider":"System.Runtime","name":"cpu-usage","displayName":"CPU Usage","unit":"%","counterType":"Metric","tags":"","value":0.008740756370863644}
{"timestamp":"2025-02-11T10:52:45.9428311+00:00","provider":"System.Runtime","name":"working-set","displayName":"Working Set","unit":"MB","counterType":"Metric","tags":"","value":97.042432}

There is also a /metrics API for metrics in the Prometheus text-based exposition format. This endpoint differs from /livemetrics in that it does not require an egress provider and returns the metrics directly as the HTTP response:

> curl http://localhost:52323/metrics
# HELP systemruntime_cpu_usage_ratio CPU Usage
# TYPE systemruntime_cpu_usage_ratio gauge
systemruntime_cpu_usage_ratio 0.0042445116638396615 1739271200943
systemruntime_cpu_usage_ratio 0.004385383815808173 1739271205942
systemruntime_cpu_usage_ratio 0.004884701520313601 1739271210942
# HELP systemruntime_working_set_bytes Working Set
# TYPE systemruntime_working_set_bytes gauge
systemruntime_working_set_bytes 99098624 1739271195942
systemruntime_working_set_bytes 99086336 1739271200943
systemruntime_working_set_bytes 99074048 1739271205942

In-process features (stacks, exceptions, logs)

Note

Using these features requires dotnet monitor in-process features to be enabled. Because these libraries are loaded into the target application (not into dotnet monitor), they may impact memory and CPU utilization in the target application.
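
For reference, the Bicep template below enables them with the following dotnet monitor settings (shown here as environment variables):

DOTNETMONITOR_InProcessFeatures__Enabled=true
# parameter capturing is experimental and must be enabled separately
DOTNETMONITOR_InProcessFeatures__ParameterCapturing__Enabled=true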

You can capture call stacks of the target process using /stacks:

> curl http://localhost:52323/stacks?egressProvider=monitorBlob

For example showing the ASP.NET Core main thread:

Thread: (0x1)
  System.Private.CoreLib.dll!System.Threading.Monitor.Wait
  System.Private.CoreLib.dll!System.Threading.ManualResetEventSlim.Wait
  System.Private.CoreLib.dll!System.Threading.Tasks.Task.SpinThenBlockingWait
  System.Private.CoreLib.dll!System.Threading.Tasks.Task.InternalWaitCore
  System.Private.CoreLib.dll!System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification
  Microsoft.Extensions.Hosting.Abstractions.dll!Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run
  aspnetapp.dll!Program.<Main>$
  [NativeFrame]

You can list first chance exceptions thrown by your application:

> curl http://localhost:52323/exceptions?egressProvider=monitorBlob

You can get ILogger logs emitted by your application. The LoggingEventSource provider must be registered in order to capture logs (default host builders automatically register this).

> curl http://localhost:52323/logs?egressProvider=monitorBlob
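
The /logs endpoint also accepts query parameters such as durationSeconds and level; a hedged example collecting one minute of logs at Information level and above (parameter names as in the dotnet monitor API documentation):

> curl "http://localhost:52323/logs?durationSeconds=60&level=Information&egressProvider=monitorBlob"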

And finally you can capture method parameters for the invocations of your desired method using /parameters:

> curl -X POST http://localhost:52323/parameters?egressProvider=monitorBlob -H "Content-Type: application/json" -d '{"methods":[{"moduleName":"aspnetapp.dll","typeName":"aspnetapp.Pages.Pages_Privacy","methodName":"ExecuteAsync"}],"captureLimit":3}'

dotnet-remote-diagnostics.bicep

param location string = resourceGroup().location

// storage account to store dotnet-monitor egressed artifacts
resource storageAccount 'Microsoft.Storage/storageAccounts@2023-05-01' = {
  name: 'st${uniqueString(resourceGroup().id)}'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
  resource blobServices 'blobServices' = {
    name: 'default'
    resource monitorEgressContainer 'containers' = {
      name: 'monitoregress'
    }
  }
  resource fileServices 'fileServices' = {
    name: 'default'
    resource diagnosticsShare 'shares' = {
      name: 'diagnosticsshare'
    }
  }
}

resource containerAppEnvironment 'Microsoft.App/managedEnvironments@2024-03-01' = {
  name: 'cae-containerapp'
  location: location
  properties: {
    appLogsConfiguration: {
      destination: null
    }
    workloadProfiles: [
      {
        name: 'Consumption'
        workloadProfileType: 'Consumption'
      }
    ]
  }
  resource diagnosticsStorage 'storages' = {
    name: 'diagnosticsstorage'
    properties: {
      azureFile: {
        accountName: storageAccount.name
        shareName: storageAccount::fileServices::diagnosticsShare.name
        accountKey: storageAccount.listKeys().keys[0].value
        accessMode: 'ReadWrite'
      }
    }
  }
}

// identity used by dotnet-monitor to access the storage account
resource identity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
  name: 'id-containerapp'
  location: location
}

// permission to write artifacts to the storage account
var storageBlobDataContributorRoleDefinitionId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe')
var storageBlobDataContributorRoleAssignmentName = guid(identity.id, storageAccount.id, storageBlobDataContributorRoleDefinitionId)
resource storageBlobDataContributorRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: storageBlobDataContributorRoleAssignmentName
  scope: storageAccount
  properties: {
    roleDefinitionId: storageBlobDataContributorRoleDefinitionId
    principalType: 'ServicePrincipal'
    principalId: identity.properties.principalId
  }
}

resource containerApp 'Microsoft.App/containerApps@2024-03-01' = {
  name: 'cap-containerapp'
  location: location
  identity: {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${identity.id}': {}
    }
  }
  properties: {
    environmentId: containerAppEnvironment.id
    workloadProfileName: 'Consumption'
    configuration: {
      ingress: {
        external: true
        allowInsecure: false
        transport: 'auto'
        // https://learn.microsoft.com/en-us/dotnet/core/compatibility/containers/8.0/aspnet-port
        targetPort: 8080
      }
    }
    template: {
      volumes: [
        // replica-scoped ephemeral storage
        // used to create diagnostic port ipc socket between the aspnetapp and dotnet-monitor containers
        {
          name: 'dotnet-monitor-volume'
          storageType: 'EmptyDir'
        }
        {
          name: 'tmp-volume'
          storageType: 'EmptyDir'
        }
        {
          name: 'diagnostics-volume'
          storageType: 'AzureFile'
          storageName: containerAppEnvironment::diagnosticsStorage.name
        }
      ]
      containers: [
        // application container
        {
          name: 'aspnetapp'
          // https://github.com/dotnet/dotnet-docker/blob/main/documentation/distroless.md
          // https://mcr.microsoft.com/en-us/artifact/mar/dotnet/samples/about
          image: 'mcr.microsoft.com/dotnet/samples:aspnetapp-chiseled-8.0'
          env: [
            // connect to dotnet-monitor listener on shared socket
            // https://learn.microsoft.com/en-us/dotnet/core/diagnostics/diagnostic-port#configure-additional-diagnostic-ports
            {
              name: 'DOTNET_DiagnosticPorts'
              value: '/dotnet-monitor/dotnet-monitor.sock'
            }
          ]
          volumeMounts: [
            {
              volumeName: 'dotnet-monitor-volume'
              mountPath: '/dotnet-monitor'
            }
            {
              volumeName: 'tmp-volume'
              mountPath: '/tmp'
            }
          ]
          resources: {
            cpu: json('0.25')
            memory: '0.5Gi'
          }
        }
        // monitor container
        {
          name: 'monitor'
          // https://learn.microsoft.com/en-us/dotnet/core/diagnostics/dotnet-monitor
          // https://github.com/dotnet/dotnet-monitor
          // https://mcr.microsoft.com/en-us/artifact/mar/dotnet/monitor/about
          image: 'mcr.microsoft.com/dotnet/monitor:8.0'
          env: [
            // if DOTNETMONITOR_DiagnosticPort__ConnectionMode is Listen and DOTNETMONITOR_Storage__DefaultSharedPath is set
            // no need to configure DOTNETMONITOR_DiagnosticPort__EndpointName
            // the combination automatically creates a Unix Domain Socket named dotnet-monitor.sock under the default shared path
            // => in this case /dotnet-monitor/dotnet-monitor.sock
            // https://github.com/dotnet/dotnet-monitor/blob/37d10517d7f85294634a0404fabf61fb3532fb0d/documentation/configuration/diagnostic-port-configuration.md
            {
              name: 'DOTNETMONITOR_DiagnosticPort__ConnectionMode'
              value: 'Listen'
            }
            // https://github.com/dotnet/dotnet-monitor/blob/37d10517d7f85294634a0404fabf61fb3532fb0d/documentation/configuration/storage-configuration.md#default-shared-path
            {
              name: 'DOTNETMONITOR_Storage__DefaultSharedPath'
              value: '/dotnet-monitor'
            }
            // bindings for the diagnostics api, http ok on internal endpoint
            {
              name: 'DOTNETMONITOR_Urls'
              value: 'http://+:52323'
            }
            // binds to metric urls that only expose the /metrics endpoint
            // unlike the other endpoints the metrics urls do not require authentication as metrics are generally safe
            // https://github.com/dotnet/dotnet-monitor/blob/37d10517d7f85294634a0404fabf61fb3532fb0d/documentation/configuration/metrics-configuration.md#metrics-urls
            {
              name: 'DOTNETMONITOR_Metrics__Endpoints'
              value: 'http://+:52325'
            }
            // use non-json logging to stdout
            // https://learn.microsoft.com/en-us/dotnet/core/extensions/console-log-formatter#simple
            {
              name: 'Logging__Console__FormatterName'
              value: 'simple'
            }
            // https://github.com/dotnet/dotnet-monitor/blob/37d10517d7f85294634a0404fabf61fb3532fb0d/documentation/configuration/egress-configuration.md
            {
              name: 'DOTNETMONITOR_Egress__AzureBlobStorage__monitorBlob__accountUri'
              value: storageAccount.properties.primaryEndpoints.blob
            }
            {
              name: 'DOTNETMONITOR_Egress__AzureBlobStorage__monitorBlob__containerName'
              value: storageAccount::blobServices::monitorEgressContainer.name
            }
            {
              name: 'DOTNETMONITOR_Egress__AzureBlobStorage__monitorBlob__managedIdentityClientId'
              value: identity.properties.clientId
            }
            // Enable Call Stacks and Exceptions History
            // Some features of dotnet monitor require loading libraries into target applications
            // https://github.com/dotnet/dotnet-monitor/blob/37d10517d7f85294634a0404fabf61fb3532fb0d/documentation/configuration/in-process-features-configuration.md
            {
              name: 'DOTNETMONITOR_InProcessFeatures__Enabled'
              value: 'true'
            }
            // Captures parameters for one or more methods each time they are called
            // This feature is currently marked as experimental and so needs to be explicitly enabled
            // https://github.com/dotnet/dotnet-monitor/blob/37d10517d7f85294634a0404fabf61fb3532fb0d/documentation/api/parameters.md#enabling
            {
              name: 'DOTNETMONITOR_InProcessFeatures__ParameterCapturing__Enabled'
              value: 'true'
            }
          ]
          // https://github.com/dotnet/dotnet-monitor/blob/3498f108bd0248e4f7aeb649364e3e02fed6fb94/documentation/compatibility/7.0/README.md#docker-container-entrypoint-split
          // https://learn.microsoft.com/en-us/dotnet/core/diagnostics/dotnet-monitor#dotnet-monitor-collect
          args: [
            'collect'
            // requires users to specify an egress provider by preventing the default HTTP response for logs, traces, dumps, gcdumps, and live metrics
            // https://github.com/dotnet/dotnet-monitor/blob/37d10517d7f85294634a0404fabf61fb3532fb0d/documentation/egress.md#disabling-http-egress
            '--no-http-egress'
            // do not require authentication => do not use in production
            // https://github.com/dotnet/dotnet-monitor/blob/37d10517d7f85294634a0404fabf61fb3532fb0d/documentation/authentication.md#disabling-authentication
            '--no-auth'
          ]
          volumeMounts: [
            {
              volumeName: 'dotnet-monitor-volume'
              mountPath: '/dotnet-monitor'
            }
          ]
          // https://github.com/dotnet/dotnet-monitor/blob/3498f108bd0248e4f7aeb649364e3e02fed6fb94/documentation/docker-compose.md#recommended-container-limits
          resources: {
            cpu: json('0.25')
            memory: '0.5Gi'
          }
        }
        // interactive shell container
        {
          name: 'azurelinux'
          // https://mcr.microsoft.com/en-us/artifact/mar/azurelinux/base/core/about
          image: 'mcr.microsoft.com/azurelinux/base/core:3.0'
          env: [
            {
              name: 'PATH'
              value: '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.dotnet/tools'
            }
          ]
          command: [
            '/bin/bash'
            '-c'
            join([
              'tdnf update -y && tdnf -y install ca-certificates jq dotnet-sdk-9.0 powershell'
              // https://learn.microsoft.com/en-us/dotnet/core/diagnostics/dotnet-counters#install
              'dotnet tool install --global dotnet-counters'
              // https://learn.microsoft.com/en-us/dotnet/core/diagnostics/dotnet-trace#install
              'dotnet tool install --global dotnet-trace'
              'sleep infinity'
            ], '\n')
          ]
          volumeMounts: [
            {
              volumeName: 'dotnet-monitor-volume'
              mountPath: '/dotnet-monitor'
            }
            {
              volumeName: 'tmp-volume'
              mountPath: '/tmp'
            }
            {
              volumeName: 'diagnostics-volume'
              mountPath: '/dotnet-diagnostics'
            }
          ]
          resources: {
            cpu: json('0.25')
            memory: '0.5Gi'
          }
        }
      ]
      scale: {
        minReplicas: 1
        maxReplicas: 1
      }
    }
  }
}