👉 Try the demo: https://github.com/LM-Kit/lm-kit-net-samples/tree/main/console_net/agents/graph_orchestration_showcase

Graph Orchestration Showcase for C# .NET Applications


🎯 Purpose of the Demo

Demonstrates the LMKit.Agents.Orchestration.Nodes graph composition layer introduced in LM-Kit.NET 2026.5.2 by building a real, useful draft -> review -> revise pipeline. The four prebuilt orchestrators (PipelineOrchestrator, ParallelOrchestrator, RouterOrchestrator, SupervisorOrchestrator) each express one fixed pattern. The graph layer lets you nest any combination of those patterns in a single workflow with one entry point.

The demo's six agents collaborate to turn a user question into one polished final answer: a classifier picks a domain expert, the expert writes a deliberately flawed first draft (verbose openers, fabricated statistics), two reviewers critique it concurrently, and a final reviser rewrites the draft using their feedback.

👥 Who Should Use This Demo

C# / .NET developers who need orchestration shapes that no single prebuilt orchestrator can express, and who want to see how graph nodes plus the shared OrchestrationContext.State enable real multi-agent quality-control patterns: classifier-routed expert workflows, parallel review-and-critique pipelines, draft / revise loops, and any nesting of those patterns.

🚀 What Problem It Solves

Previously, multi-pattern workflows required custom code that re-implemented the orchestration loop. With graph nodes, every shape is an IOrchestrationNode composition: Sequential[Conditional[branch1, branch2], Parallel[reviewer1, reviewer2], FinalReviser] and so on, with no custom orchestrator class required. Custom IOrchestrationNode instances drop in alongside AgentNodes for non-agent work (state capture, transformations, decorators) and have first-class access to the orchestrator's lifecycle events, distributed-trace spans, and shared state.

💻 Demo Application Overview

The demo builds the following graph and runs two example questions through it:

Sequential("flow")
├── ClassifyAndPreserveNode("classify", Classifier)
│      writes State["classification"] = "tech" | "biz"
│      forwards the ORIGINAL question downstream
├── CaptureExpertDraftNode( ConditionalNode("route") )
│   ├── tech : AgentNode("tech", TechExpert)
│   └── biz  : AgentNode("biz",  BizExpert)
│      writes State["expert_draft"] = expert's output
│      passes the draft to the next stage
├── ParallelNode("review")
│   ├── AgentNode("style", StyleReviewer)
│   └── AgentNode("facts", FactChecker)
│      both reviewers see the draft concurrently
│      outputs are aggregated as the parallel node's result
└── ReviseDraftNode("revise", Reviser)
       reads OriginalInput + State["expert_draft"] + reviewer feedback
       produces the polished FINAL ANSWER

Six agents, four composition patterns, one entry point, one polished answer at the end. OrchestrationContext.State carries cross-stage data (the classification label and the expert's draft) so the final reviser sees everything it needs without polluting any intermediate agent's input.
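
The tree above maps directly onto node composition in C#. This is a minimal sketch only: the constructor signatures, the ConditionalNode selector shape, and the GraphOrchestrator entry point are assumptions inferred from the node names in this README, not the verified LM-Kit.NET 2026.5.2 API; `classifier`, `techExpert`, etc. are Agent instances configured elsewhere.

```csharp
// Sketch only -- signatures below are assumptions, check the LM-Kit.NET docs.
var route = new ConditionalNode("route",
    ctx => (string)ctx.State["classification"],        // "tech" | "biz"
    new Dictionary<string, IOrchestrationNode>
    {
        ["tech"] = new AgentNode("tech", techExpert),
        ["biz"]  = new AgentNode("biz",  bizExpert),
    });

var flow = new SequentialNode("flow",
    new ClassifyAndPreserveNode("classify", classifier), // writes State["classification"]
    new CaptureExpertDraftNode(route),                   // writes State["expert_draft"]
    new ParallelNode("review",
        new AgentNode("style", styleReviewer),
        new AgentNode("facts", factChecker)),
    new ReviseDraftNode("revise", reviser));

var orchestrator = new GraphOrchestrator(flow);          // behaves like any IOrchestrator
var answer = await orchestrator.ExecuteAsync(question);
```

The point to notice is that the Conditional sits inside the Sequential next to a Parallel: nesting patterns is just nesting constructor calls.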

✨ Key Features

  • IOrchestrationNode-based composition (AgentNode, SequentialNode, ParallelNode, ConditionalNode).
  • Three custom IOrchestrationNode implementations included in the demo:
    • ClassifyAndPreserveNode runs an agent, writes its label to State, and forwards the original input downstream.
    • CaptureExpertDraftNode is a decorator that wraps an inner node, captures its output into State, and passes the result through unchanged.
    • ReviseDraftNode composes the original question, the captured draft, and the reviewer feedback into a single prompt for a final agent and returns its polished output.
  • GraphOrchestrator host that runs an arbitrary node graph and behaves like any IOrchestrator.
  • Per-orchestration OrchestrationOptions (MaxCompletionTokens, ReasoningLevel, sampling) propagate uniformly to every agent in the graph via the same single source of truth used by the prebuilt orchestrators.
  • Lifecycle events (BeforeAgentExecution, AfterAgentExecution) fire for every agent in the graph, including agents invoked from custom nodes via NodeContext.ExecuteAgentAsync. The demo wires them to a color-coded live trace.
  • Pairs naturally with LMKit.Agents.Observability.AgentDiagnostics for distributed tracing of the whole graph.
  • Color-coded console output: each role has its own color, char counts on every input and output, full content with line wrapping, and a prominent FINAL ANSWER block followed by a per-stage transcript.
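
The decorator idea behind CaptureExpertDraftNode generalizes to any pass-through state capture. Below is a hedged sketch of a hypothetical `CaptureToStateNode`, assuming IOrchestrationNode exposes a single `ExecuteAsync(NodeContext)` member; the real interface in LM-Kit.NET may differ in member names and return types.

```csharp
// Hypothetical decorator node -- interface shape is an assumption, not the
// verified LM-Kit.NET IOrchestrationNode contract.
public sealed class CaptureToStateNode : IOrchestrationNode
{
    private readonly IOrchestrationNode _inner;
    private readonly string _stateKey;

    public CaptureToStateNode(IOrchestrationNode inner, string stateKey)
    {
        _inner = inner;
        _stateKey = stateKey;
    }

    public async Task<string> ExecuteAsync(NodeContext context)
    {
        // Run the wrapped node, remember its output in shared state,
        // and pass the result through unchanged.
        string output = await _inner.ExecuteAsync(context);
        context.State[_stateKey] = output;
        return output;
    }
}
```

Because the decorator forwards its result untouched, it can wrap any node (agent, conditional, even another decorator) without downstream stages noticing.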

Example Output

==============================================================================
  Graph Orchestration Showcase  (LM-Kit.NET 2026.5.2)
  draft -> review -> revise pipeline
==============================================================================

------------------------------------------------------------------------------
QUESTION: How should I version a public REST API?
------------------------------------------------------------------------------

  [start ] Classifier    <- INPUT (39 chars)
           | How should I version a public REST API?
  [finish] Classifier    -> OUTPUT (4 chars)
           | tech

  [start ] TechExpert    <- INPUT (39 chars)
           | How should I version a public REST API?
  [finish] TechExpert    -> OUTPUT (655 chars)
           | It is widely accepted in modern software engineering that every single
           | change to a public REST API requires a new version number ... 87% of teams
           | in 2023 reported that breaking changes ...

  [finish] StyleReviewer -> OUTPUT (143 chars)
           | The phrase "every single change" is unnecessarily emphatic and clunky for
           | a professional tone, so replace it with "any change" for better flow.
  [finish] FactChecker   -> OUTPUT (194 chars)
           | The claim that 87% of teams in 2023 reported ... lacks a specific citation
           | and should be verified.

  [finish] Reviser       -> OUTPUT (400 chars)
           | Any change to a public REST API requires a new version number to ensure
           | backward compatibility. Teams should implement strict versioning strategies,
           | such as URL path conventions (e.g., `/v1/resource`) or header-based
           | versioning, to prevent clients from relying on outdated endpoints.
           | Additionally, breaking changes must be documented clearly in API
           | documentation before deployment to avoid confusion.

==============================================================================
  FINAL ANSWER (Reviser, polished using Style + Facts feedback)
==============================================================================

  Any change to a public REST API requires a new version number to ensure
  backward compatibility. Teams should implement strict versioning strategies,
  such as URL path conventions (e.g., `/v1/resource`) or header-based
  versioning ...

The Reviser produced a 400-char polished answer from a 655-char flawed draft (about 39% shorter): the verbose corporate opener is gone, the fabricated 87% statistic is gone, and "every single change" is now "Any change", all driven by the parallel reviewers' specific feedback.

⚙️ Getting Started

Prerequisites: .NET 8.0+, ~6 GB VRAM (CPU-only also works, but more slowly). The default model qwen3.5:9b downloads on first run. The model menu also offers qwen3.5:4b, qwen3.5:2b, gemma4:e4b, and gptoss:20b.

Run:

cd demos/console_net/agents/graph_orchestration_showcase
dotnet run -c Release

A model picker prompts at startup; the same LM instance is shared by all six agents.

🚀 Extend the Demo

  • Replace one AgentNode with a SupervisorOrchestrator wrapped in a custom node. Its delegations also emit agent.delegate activities and respect the orchestration's options.
  • Add a third reviewer to the ParallelNode (security audit, cost analysis, regulatory compliance, etc.). The graph absorbs it without any other change; the Reviser sees its feedback automatically because the parallel aggregator merges all branch outputs.
  • Implement another IOrchestrationNode for non-agent work (DB lookup, schema validation, embeddings retrieval) and slot it in where it belongs.
  • Add an ActivityListener subscribed to AgentDiagnostics.SourceName and watch every step's agent.execute and orchestration.execute span scroll past during execution. See the Agent Telemetry Showcase for a complete example.
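
Wiring such a listener takes a few lines of standard System.Diagnostics code. In the sketch below, only AgentDiagnostics.SourceName comes from LM-Kit.NET (its exact usage is an assumption based on this README); everything else is the plain .NET ActivityListener API.

```csharp
using System.Diagnostics;

// Listen only to LM-Kit's agent ActivitySource; AgentDiagnostics.SourceName
// is the LM-Kit.NET identifier referenced above (assumption from this README).
var listener = new ActivityListener
{
    ShouldListenTo = source => source.Name == AgentDiagnostics.SourceName,
    Sample = (ref ActivityCreationOptions<ActivityContext> _) =>
        ActivitySamplingResult.AllData,
    ActivityStopped = activity =>
        Console.WriteLine(
            $"{activity.OperationName} took {activity.Duration.TotalMilliseconds:F0} ms"),
};

ActivitySource.AddActivityListener(listener);
// Run the orchestrator after this point; agent.execute and
// orchestration.execute spans print as each stage completes.
```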

📚 Additional Resources
