Distributed Tracing for C# .NET Agents
🎯 What You Will Build
Production observability for an agent runtime. Every orchestration run, every per-agent invocation, and every delegation surfaces as a distributed-trace span with stable tag names. Plug the spans into OpenTelemetry (Jaeger, Tempo, Datadog, Application Insights) or consume them locally with a System.Diagnostics.ActivityListener.
This guide ships a single self-contained Program.cs you can paste into a fresh console project and run. It demonstrates:
- The `LMKit.Agents.Observability.AgentDiagnostics` `ActivitySource` introduced in LM-Kit.NET 2026.5.2.
- Standard tag names emitted on every span: `agent.name`, `orchestrator.name`, `orchestration.step`, `agent.planning_strategy`, `agent.status`, `agent.inference_count`, `delegation.from`, `delegation.to`.
- A subscribed `ActivityListener` that prints each span as it ends.
The existing in-process IAgentTracer system (in-memory tracer, console tracer, JSON exporter) is unchanged and complementary. Agent tracers receive structured per-iteration events tied to the planning loop, while ActivitySource emits coarser spans suitable for cross-process correlation.
✅ Prerequisites
- .NET 8.0 or later.
- LM-Kit.NET 2026.5.2 or later.
- Roughly 6 GB of free VRAM (CPU inference works too, just slower).
- The `qwen3.5:9b` model. It is downloaded on first run from the HuggingFace mirror.
Add the LM-Kit.NET package to a new console project:
```bash
dotnet new console -n AgentTracingDemo
cd AgentTracingDemo
dotnet add package LM-Kit.NET --prerelease
```
📦 Complete Copy-Paste Program
Replace the contents of Program.cs with the snippet below. It compiles as-is and runs end to end.
```csharp
using System;
using System.Diagnostics;
using System.Text;
using System.Threading.Tasks;
using LMKit.Agents;
using LMKit.Agents.Observability;
using LMKit.Agents.Orchestration;
using LMKit.Model;
using LMKit.TextGeneration.Chat;
using LMKit.TextGeneration.Sampling;

internal static class Program
{
    private static async Task Main()
    {
        Console.OutputEncoding = Encoding.UTF8;

        // Optional: license key. Trial mode runs without one.
        // LMKit.Licensing.LicenseManager.SetLicenseKey("YOUR-KEY-HERE");

        using var listener = SubscribeToAgentSpans();

        Console.WriteLine("Loading qwen3.5:9b...");
        using var model = LM.LoadFromModelID("qwen3.5:9b");
        Console.WriteLine("Loaded.\n");

        var researcher = Agent.CreateBuilder(model)
            .WithPersona("Researcher")
            .WithInstruction("List 2 short bullet facts about the topic.")
            .Build();

        var writer = Agent.CreateBuilder(model)
            .WithPersona("Writer")
            .WithInstruction("Turn the bullets into one concise sentence.")
            .Build();

        var orchestrator = new PipelineOrchestrator()
            .AddStage("research", researcher)
            .AddStage("write", writer);

        var options = new OrchestrationOptions
        {
            SamplingMode = new GreedyDecoding(),
            MaxCompletionTokens = 96,
            ReasoningLevel = ReasoningLevel.None,
            StopOnFailure = true
        };

        Console.WriteLine("Running pipeline (research -> write)...\n");
        var result = await orchestrator.ExecuteAsync("Topic: edge AI", options);

        Console.WriteLine();
        Console.WriteLine("Final output:");
        Console.WriteLine(result.Content?.Trim());
    }

    private static ActivityListener SubscribeToAgentSpans()
    {
        var listener = new ActivityListener
        {
            ShouldListenTo = src => src.Name == AgentDiagnostics.SourceName,
            Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData,
            ActivityStopped = activity =>
            {
                string agent = (activity.GetTagItem("agent.name") as string) ?? "-";
                string orchestrator = (activity.GetTagItem("orchestrator.name") as string) ?? "-";
                string status = activity.Status.ToString();

                Console.WriteLine(
                    $"[span] {activity.OperationName} " +
                    $"{activity.Duration.TotalMilliseconds:F0}ms " +
                    $"agent={agent} orchestrator={orchestrator} status={status}");
            }
        };

        ActivitySource.AddActivityListener(listener);
        return listener;
    }
}
```
Run it:
```bash
dotnet run -c Release
```
You should see one orchestration.execute span at the end of the run, plus one agent.execute span per stage. Each span carries duration, status, and the standard tags.
📡 Spans Emitted
| Span name | Kind | Where | Tags |
|---|---|---|---|
| `orchestration.execute` | Internal | Top of `IOrchestrator.ExecuteAsync` | `orchestrator.name`, `orchestration.agent_count`, `orchestration.inference_count` |
| `agent.execute` | Internal | Per-agent invocation inside an orchestration | `agent.name`, `orchestrator.name`, `orchestration.step`, `agent.planning_strategy`, `agent.status`, `agent.inference_count` |
| `agent.delegate` | Internal | Inside `DelegateTool.InvokeAsync` when a supervisor delegates | `delegation.from`, `delegation.to` |
Every span ends with an ActivityStatusCode of Ok or Error (with the failure description), so downstream tooling can filter and alert on broken runs without parsing tags.
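Because the status is a first-class `ActivityStatusCode` rather than a tag, a local listener can isolate failures without string parsing. Below is a minimal sketch using only `System.Diagnostics`; the listener variable name and log format are illustrative, and `"LMKit.Agents"` is assumed to be the value of `AgentDiagnostics.SourceName` as stated later in this guide:

```csharp
using System;
using System.Diagnostics;

// Error-only listener: ignores healthy spans and logs only those that
// ended with ActivityStatusCode.Error, including the failure description.
var errorListener = new ActivityListener
{
    // "LMKit.Agents" is the value of AgentDiagnostics.SourceName.
    ShouldListenTo = src => src.Name == "LMKit.Agents",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData,
    ActivityStopped = activity =>
    {
        if (activity.Status == ActivityStatusCode.Error)
        {
            Console.Error.WriteLine(
                $"[FAILED] {activity.OperationName}: {activity.StatusDescription}");
        }
    }
};
ActivitySource.AddActivityListener(errorListener);
```

The same shape works for routing failed spans to a pager or a dead-letter log while healthy traffic goes only to your tracing backend.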
🚀 Production: Wire Into OpenTelemetry
Add the standard OpenTelemetry packages to your project, then register the agent activity source alongside any other source you already export:
```bash
dotnet add package OpenTelemetry
dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol
```
```csharp
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
using LMKit.Agents.Observability;

using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService("my-agent-service"))
    .AddSource(AgentDiagnostics.SourceName) // LM-Kit agent runtime
    .AddSource("MyApp")                     // your own ActivitySources
    .AddOtlpExporter()                      // OTLP to Jaeger, Tempo, Datadog, etc.
    .Build();

// All subsequent agent / orchestration calls export spans to the tracer.
var result = await orchestrator.ExecuteAsync("Run the daily summary.");
```
`AgentDiagnostics.SourceName` is the constant `"LMKit.Agents"`. Use the constant instead of a string literal so your registration does not drift if the name ever changes.
The tags emitted by LM-Kit follow OpenTelemetry-style naming (agent.name, orchestrator.name, etc.), so they index cleanly in Jaeger, Tempo, and Datadog without remapping.
🔍 What You Will See
For a SupervisorOrchestrator that delegates to two workers, the trace tree looks like:
```
orchestration.execute (orchestrator.name=SupervisorOrchestrator)
├── agent.execute (agent.name=Supervisor, orchestration.step=1, agent.planning_strategy=ReAct)
│   ├── agent.delegate (delegation.from=Supervisor, delegation.to=writer)
│   └── agent.delegate (delegation.from=Supervisor, delegation.to=editor)
```
Each span carries duration, status, and the standard tags. That is enough for cost dashboards, latency SLOs, and failure-rate alerts without instrumenting a single line of your own code.
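Before wiring a full backend, you can get a quick in-process latency view by folding span durations into per-name counters with a plain listener. This is a sketch built only on the BCL; the `stats` dictionary and its shape are illustrative, not an LM-Kit API, and `"LMKit.Agents"` is assumed to be the value of `AgentDiagnostics.SourceName`:

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;

// Per-span-name latency accumulator: tracks invocation count and total
// duration, enough to compute mean latency per span name after a run.
var stats = new ConcurrentDictionary<string, (long Count, double TotalMs)>();

var statsListener = new ActivityListener
{
    ShouldListenTo = src => src.Name == "LMKit.Agents", // AgentDiagnostics.SourceName
    Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData,
    ActivityStopped = a => stats.AddOrUpdate(
        a.OperationName,
        (1, a.Duration.TotalMilliseconds),
        (_, prev) => (prev.Count + 1, prev.TotalMs + a.Duration.TotalMilliseconds))
};
ActivitySource.AddActivityListener(statsListener);
```

After a run, `stats["agent.execute"]` holds the count and total milliseconds for that span name; mean latency is `TotalMs / Count`.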
🎛 Filtering, Sampling, and Correlation
Standard ActivityListener semantics apply:
```csharp
// Sample only 10% of orchestration runs to reduce export volume.
listener.Sample = (ref ActivityCreationOptions<ActivityContext> opts) =>
    opts.Name == "orchestration.execute" &&
    Random.Shared.NextDouble() < 0.10
        ? ActivitySamplingResult.AllData
        : ActivitySamplingResult.None;

// Force-record specific orchestrators (FirstOrDefault needs `using System.Linq;`).
listener.Sample = (ref ActivityCreationOptions<ActivityContext> opts) =>
{
    var orchestrator = opts.Tags.FirstOrDefault(t => t.Key == "orchestrator.name").Value as string;
    return orchestrator == "SupervisorOrchestrator"
        ? ActivitySamplingResult.AllData
        : ActivitySamplingResult.PropagationData;
};
```
Spans inherit the ambient Activity.Current from your application code, so an HTTP request that triggers an agent has the orchestration spans nested under the request's trace automatically.
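You can verify that nesting locally without any backend. The sketch below simulates the runtime's span with a hand-made `ActivitySource`; the `"MyApp"` source and the simulated `orchestration.execute` span are illustrative stand-ins, since in real use the child span is started by LM-Kit, not by you:

```csharp
using System;
using System.Diagnostics;

// Record everything so both activities below are actually created.
using var listener = new ActivityListener
{
    ShouldListenTo = _ => true,
    Sample = (ref ActivityCreationOptions<ActivityContext> _) => ActivitySamplingResult.AllData
};
ActivitySource.AddActivityListener(listener);

var appSource = new ActivitySource("MyApp");
var agentSource = new ActivitySource("LMKit.Agents"); // stand-in for the runtime's source

using (var request = appSource.StartActivity("http.request"))
{
    // Anything started while "http.request" is Activity.Current becomes its child.
    using var run = agentSource.StartActivity("orchestration.execute");
    Console.WriteLine(run!.TraceId == request!.TraceId); // prints True: same trace
}
```

The child span shares the parent's `TraceId` and records the parent's `Id` as its `ParentId`, which is exactly what a tracing backend uses to draw the nested tree.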
🤝 Pairing with the Existing IAgentTracer System
The two systems are independent and complementary:
| Mechanism | Granularity | Cross-process |
|---|---|---|
| `LMKit.Agents.Observability.AgentDiagnostics` (`ActivitySource`) | Coarse spans for orchestration, agent execution, delegation | ✓ Distributed-trace correlation |
| `LMKit.Agents.Observability.IAgentTracer` (`InMemoryTracer`, `ConsoleTracer`, `CompositeTracer`, `JsonTraceExporter`) | Fine-grained per-iteration events tied to the planning loop | ✗ In-process only |
Use the agent tracer when you want to inspect planning iterations, tool calls, and reasoning traces in detail (for example, during agent-development debugging). Use the activity source for cross-service production tracing. Both can run side by side; neither preempts the other.
✅ Production-Readiness Checklist
When shipping an agent service, this is the smallest reasonable observability footprint:
- [ ] Register `AgentDiagnostics.SourceName` with your OpenTelemetry tracer provider.
- [ ] Set a service name on the resource (`ResourceBuilder.AddService("...")`).
- [ ] Export to your tracing backend (OTLP, Jaeger, Tempo, Datadog, Application Insights).
- [ ] Sample at a rate appropriate to your span volume. Per-orchestration cost differs significantly from per-agent invocation cost.
- [ ] Add latency and failure-rate alerts on `orchestration.execute` and `agent.execute` spans.
- [ ] Confirm cross-trace correlation by triggering an agent from an HTTP endpoint and verifying the orchestration span nests under the request span.
Done. Every agent run is now visible alongside the rest of your service traces.