Enum PlanningStrategy
Specifies the reasoning and planning approach an agent uses to decompose and solve tasks.
public enum PlanningStrategy
Fields
None = 0
No explicit planning strategy. The agent responds directly without structured reasoning steps.
This is the fastest strategy with the lowest token overhead. The model generates a response directly based on the input without explicit intermediate reasoning steps.
Best For:
- Simple factual questions
- Conversational responses
- Tasks where the model already incorporates reasoning (e.g., reasoning-optimized models)
- High-throughput scenarios where latency matters
ChainOfThought = 1
Chain-of-Thought (CoT) prompting. The agent reasons through problems step-by-step before providing a final answer.
Chain-of-Thought prompting encourages the model to break down complex problems into intermediate reasoning steps. This significantly improves performance on arithmetic, commonsense reasoning, and symbolic manipulation tasks.
How It Works:
The agent generates explicit reasoning steps (e.g., "First, I need to...", "This means that...", "Therefore...") before arriving at the final answer.
Best For:
- Mathematical word problems
- Logical deduction and inference
- Multi-step analysis
- Tasks requiring explanation of reasoning
Trade-offs: Increases response length and latency but significantly improves accuracy on reasoning tasks.
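To make the step shape concrete, the following sketch builds a CoT agent with the same builder pattern used in the Examples section below. The commented-out `RunAsync` call is a hypothetical stand-in for the agent's actual execution method, and the sample output is illustrative only.

```csharp
using LMKit.Agents;
using LMKit.Model;

using var model = new LM("path/to/model.gguf");

var mathAgent = Agent.CreateBuilder()
    .WithModel(model)
    .WithPersona("MathTutor")
    .WithPlanning(PlanningStrategy.ChainOfThought)
    .Build();

// Hypothetical invocation; substitute your agent's actual execution method:
// string answer = await mathAgent.RunAsync(
//     "A train travels 120 km in 1.5 hours. What is its average speed?");
//
// A CoT-shaped response typically reads:
//   "First, average speed = distance / time.
//    This means 120 km / 1.5 h = 80 km/h.
//    Therefore, the average speed is 80 km/h."
```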
ReAct = 2
ReAct (Reasoning + Acting) pattern. The agent interleaves reasoning traces with tool actions in a Thought-Action-Observation loop.
ReAct combines reasoning with action-taking, allowing the agent to think about what it observes from tool calls and decide on next steps dynamically. This creates a feedback loop: Thought -> Action -> Observation -> Thought -> ...
The ReAct Loop:
- Thought: The agent reasons about the current state and what action to take.
- Action: The agent calls a tool with specific arguments.
- Observation: The agent receives and interprets the tool's output.
- Repeat: Continue until the task is complete.
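The following is a minimal conceptual sketch of this control flow, assuming hypothetical `Think`, `ChooseAction`, and `RunTool` helpers; it models the pattern itself, not LMKit's internal implementation.

```csharp
// Conceptual sketch of the Thought-Action-Observation loop (the pattern,
// not LMKit's internals). Think, ChooseAction, and RunTool are hypothetical
// stand-ins for model and tool calls.
string context = "Find the current population of Reykjavik.";

while (true)
{
    string thought = Think(context);                            // Thought: reason about the current state
    (string Tool, string Args)? action = ChooseAction(thought); // Action: pick a tool and arguments
    if (action is null)
        break;                                                  // no action chosen: final answer reached
    string observation = RunTool(action.Value);                 // Observation: interpret the tool's output
    context += $"\nThought: {thought}\nObservation: {observation}";
}

// Stubs so the sketch compiles; a real agent delegates these to the model and tool registry.
static string Think(string ctx) => "I should search for the latest census figure.";
static (string Tool, string Args)? ChooseAction(string thought) => null;
static string RunTool((string Tool, string Args) call) => "140,000 (illustrative)";
```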
Best For:
- Agents with external tools (search, databases, APIs)
- Tasks requiring dynamic adaptation based on intermediate results
- Information gathering and synthesis
- Multi-step workflows with decision points
PlanAndExecute = 3
Plan-and-Execute strategy. The agent first generates a complete plan, then executes each step sequentially.
This strategy separates planning from execution. The agent first creates a detailed plan outlining all steps needed to complete the task, then executes each step in order. The plan can be revised if execution reveals issues.
Two Phases:
- Planning: Decompose the task into discrete, actionable steps.
- Execution: Execute each step, potentially revising the plan if needed.
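A minimal conceptual sketch of the two phases, assuming hypothetical `CreatePlan`, `ExecuteStep`, and `RevisePlan` helpers (this models the pattern, not LMKit's internals):

```csharp
using System.Collections.Generic;

var plan = CreatePlan("Migrate the reporting database to the new schema.");  // Planning phase

for (int i = 0; i < plan.Count; i++)
{
    bool succeeded = ExecuteStep(plan[i]);   // Execution phase: one step at a time
    if (!succeeded)
        plan = RevisePlan(plan, i);          // revise the plan if execution reveals issues
}

// Stubs so the sketch compiles; a real agent delegates these to the model.
static List<string> CreatePlan(string task) =>
    new() { "Inventory current tables", "Write migration scripts", "Run and verify" };
static bool ExecuteStep(string step) => true;
static List<string> RevisePlan(List<string> plan, int failedStep) => plan;
```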
Best For:
- Complex multi-step tasks
- Tasks with clear sequential dependencies
- Project planning and task decomposition
- Workflows where seeing the full plan upfront is valuable
Trade-offs: Higher upfront latency for planning, but more coherent execution of complex tasks.
Reflection = 4
Reflection-based reasoning. The agent generates an initial response, then critically evaluates and refines it through self-reflection.
Reflection prompting adds a self-critique and revision phase after the initial response. The agent examines its own output for errors, inconsistencies, or improvements, then produces a refined version.
Process:
- Generate: Produce an initial response.
- Reflect: Critically evaluate the response for issues.
- Refine: Produce an improved version addressing identified issues.
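A minimal conceptual sketch of this cycle, assuming hypothetical `Generate`, `Critique`, and `Refine` helpers standing in for model calls (again, the pattern, not LMKit's internals):

```csharp
string draft = Generate("Summarize the Q3 incident report.");  // 1. Generate
string critique = Critique(draft);                             // 2. Reflect
string final = critique.Length > 0
    ? Refine(draft, critique)                                  // 3. Refine
    : draft;                                                   // no issues found: keep the draft

// Stubs so the sketch compiles.
static string Generate(string task) => "Initial draft...";
static string Critique(string text) => "The timeline omits the rollback step.";
static string Refine(string text, string issues) => "Revised draft addressing: " + issues;
```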
Best For:
- Tasks where accuracy is critical
- Writing and editing tasks
- Code review and bug detection
- Fact-checking and verification
Trade-offs: Significantly increases latency (2-3x) but improves output quality by catching errors.
TreeOfThought = 5
Tree-of-Thought (ToT) exploration. The agent explores multiple reasoning paths in parallel, evaluates intermediate states, and selects the most promising branches.
Tree-of-Thought extends chain-of-thought by exploring multiple reasoning paths simultaneously, like a tree search. The agent generates several candidate thoughts at each step, evaluates them, and continues with the most promising branches.
Process:
- Generate: Produce multiple candidate reasoning steps.
- Evaluate: Score each candidate's promise.
- Select: Continue with the best candidates.
- Backtrack: Abandon unpromising paths and explore alternatives.
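A minimal conceptual sketch of this process as a small beam search, assuming hypothetical `Expand` and `Score` helpers standing in for model-generated candidates and evaluations (the pattern, not LMKit's internals):

```csharp
using System.Collections.Generic;
using System.Linq;

const int beamWidth = 2;
var frontier = new List<string> { "start" };

for (int depth = 0; depth < 3; depth++)
{
    frontier = frontier
        .SelectMany(Expand)               // Generate: several candidate thoughts per branch
        .OrderByDescending(Score)         // Evaluate: score each candidate's promise
        .Take(beamWidth)                  // Select: keep the best; weaker paths are abandoned
        .ToList();
}

// Stubs so the sketch compiles.
static IEnumerable<string> Expand(string state) => new[] { state + " -> A", state + " -> B" };
static double Score(string state) => state.Length;  // placeholder heuristic
```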
Best For:
- Problems with multiple valid solution approaches
- Puzzles and games (e.g., 24 game, crosswords)
- Creative tasks with many possible directions
- Tasks where exploration improves solution quality
Trade-offs: Highest computational cost but can find better solutions for complex problems where linear reasoning gets stuck.
Examples
Example: Selecting planning strategies for different tasks
```csharp
using LMKit.Agents;
using LMKit.Model;

using var model = new LM("path/to/model.gguf");

// Simple Q&A: no planning needed
var chatAgent = Agent.CreateBuilder()
    .WithModel(model)
    .WithPersona("Assistant")
    .WithPlanning(PlanningStrategy.None)
    .Build();

// Math and reasoning: chain-of-thought improves accuracy
var mathAgent = Agent.CreateBuilder()
    .WithModel(model)
    .WithPersona("MathTutor")
    .WithPlanning(PlanningStrategy.ChainOfThought)
    .Build();

// Tool-using research agent: ReAct for observe-think-act cycles
var researchAgent = Agent.CreateBuilder()
    .WithModel(model)
    .WithPersona("Researcher")
    .WithPlanning(PlanningStrategy.ReAct)
    .WithTools(registry => registry.Register(new WebSearchTool()))
    .Build();

// Complex project planning: decompose then execute
var plannerAgent = Agent.CreateBuilder()
    .WithModel(model)
    .WithPersona("ProjectPlanner")
    .WithPlanning(PlanningStrategy.PlanAndExecute)
    .Build();
```
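The Reflection and TreeOfThought strategies plug into the same builder; a brief sketch following the pattern above (the persona names are illustrative):

```csharp
// Accuracy-critical review: generate, self-critique, then refine
var reviewAgent = Agent.CreateBuilder()
    .WithModel(model)
    .WithPersona("CodeReviewer")
    .WithPlanning(PlanningStrategy.Reflection)
    .Build();

// Open-ended problem solving: explore multiple reasoning branches
var solverAgent = Agent.CreateBuilder()
    .WithModel(model)
    .WithPersona("PuzzleSolver")
    .WithPlanning(PlanningStrategy.TreeOfThought)
    .Build();
```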
Remarks
Planning strategies determine how an agent structures its internal reasoning process before producing a response. Different strategies offer trade-offs between speed, accuracy, and the ability to handle complex multi-step tasks.
Strategy Selection Guidelines
| Strategy | Best For |
|---|---|
| None | Simple Q&A, conversational responses, or when the model handles reasoning internally. |
| ChainOfThought | Math problems, logical reasoning, and multi-step analysis tasks. |
| ReAct | Tool-using agents that need to observe results and adapt their approach. |
| PlanAndExecute | Complex tasks requiring upfront decomposition into discrete steps. |
| Reflection | Tasks where accuracy is critical and self-correction improves quality. |
| TreeOfThought | Problems with multiple solution paths where exploration finds better answers. |
Performance Considerations
More sophisticated strategies (Reflection, TreeOfThought) typically increase latency and token consumption. Choose the simplest strategy that meets your quality requirements.
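As a hedged illustration of that guidance, an application could map its own task categories to strategies. `TaskKind` below is a hypothetical application-side enum, not part of LMKit:

```csharp
using LMKit.Agents;

// Pick the cheapest strategy that fits the task category.
static PlanningStrategy Choose(TaskKind kind) => kind switch
{
    TaskKind.SimpleQA      => PlanningStrategy.None,            // lowest latency and token cost
    TaskKind.MathReasoning => PlanningStrategy.ChainOfThought,
    TaskKind.ToolWorkflow  => PlanningStrategy.ReAct,
    TaskKind.MultiStep     => PlanningStrategy.PlanAndExecute,
    TaskKind.HighAccuracy  => PlanningStrategy.Reflection,
    TaskKind.OpenEnded     => PlanningStrategy.TreeOfThought,
    _                      => PlanningStrategy.None,
};

// Hypothetical application-side categories, not an LMKit type.
enum TaskKind { SimpleQA, MathReasoning, ToolWorkflow, MultiStep, HighAccuracy, OpenEnded }
```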