Interface IConversation

Namespace: LMKit.TextGeneration
Assembly: LM-Kit.NET.dll

Represents a conversation interface for interacting with a text generation model. Provides synchronous and asynchronous prompt submission, exposes lifecycle events for token sampling and completion post-processing, and surfaces key configuration controls, including the system prompt, sampling strategy, repetition penalties, and reasoning level.

public interface IConversation
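
The sketch below shows how the members of this interface fit together in a basic synchronous exchange. It only touches members listed on this page; obtaining a concrete IConversation instance (for example from a conversation class constructed over a loaded model) is outside the scope of this page, and the result type returned by Submit is not detailed here, so the result is simply printed.

using System;
using System.Threading;
using LMKit.TextGeneration;

public static class ConversationDemo
{
    // Works against any IConversation implementation supplied by the library.
    public static void RunPrompt(IConversation conversation)
    {
        // Configure the session before the first user message.
        conversation.SystemPrompt = "You are a concise technical assistant.";
        conversation.MaximumCompletionTokens = 512; // -1 would remove the per-turn cap

        // Submit a prompt synchronously; the CancellationToken allows early termination.
        var result = conversation.Submit(
            "Summarize the benefits of running inference locally.",
            CancellationToken.None);

        // The concrete result type is not documented on this page, so the sketch
        // only prints its string representation.
        Console.WriteLine(result);
    }
}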

Properties

MaximumCompletionTokens

Defines the maximum number of tokens permitted for the assistant's completion per turn. Set to -1 to remove the cap (output remains subject to model and context limits).

Model

Gets the LM instance associated with this conversation. Useful for inspecting model capabilities (e.g., tool calls, reasoning support).

ReasoningLevel

Controls how, and whether, intermediate "reasoning"/"thinking" content is produced and exposed.

Use None to disable reasoning entirely. Higher levels hint the model to allocate a larger budget to chain-of-thought style tokens when it supports them. Actual behavior depends on the model and its chat template capabilities.

Suggested semantics:

Level     Intended behavior
None      No reasoning tokens requested or exposed.
Low       Minimal reasoning; terse scratch space when helpful.
Medium    Balanced reasoning (default when enabled).
High      Maximize reasoning depth; may trade off speed.
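
A minimal sketch of adjusting the reasoning budget on an existing IConversation instance (here called conversation). The level names come from the table above; the name of the enum type itself is not given on this page and is assumed here to match the property name.

// Enum type name assumed to match the property; member names per the table above.
conversation.ReasoningLevel = ReasoningLevel.None;   // request no reasoning tokens at all

// Ask for deeper chain-of-thought style output when the model supports it,
// accepting slower generation in exchange.
conversation.ReasoningLevel = ReasoningLevel.High;
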
RepetitionPenalty

A RepetitionPenalty object specifying rules that discourage repetition of recent tokens and n-grams. It is typically disabled automatically when strict Grammar-like constraints are in use.

SamplingMode

A TokenSampling object specifying the sampling strategy used during text completion (e.g., temperature, top-p, top-k, or dynamic sampling).
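
A hedged sketch of assigning a sampling strategy, assuming conversation is an IConversation instance. RandomSampling, its namespace, and its Temperature/TopP settings are assumptions used for illustration; this page only establishes that the property accepts a TokenSampling object.

using LMKit.TextGeneration.Sampling; // assumed location of TokenSampling implementations

// RandomSampling and its parameters are assumed here; any TokenSampling-derived
// strategy supported by the library can be assigned in the same way.
conversation.SamplingMode = new RandomSampling
{
    Temperature = 0.7f,
    TopP = 0.9f
};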

StopSequences

Specifies sequences that cause generation to stop immediately when encountered. Matching stop sequences are not included in the final output.
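
A small sketch, assuming StopSequences exposes a standard string collection (its concrete type is not shown on this page) and that conversation is an IConversation instance.

// Stop generation as soon as either marker is produced; the marker itself is
// excluded from the final output. Collection type assumed to support Add.
conversation.StopSequences.Add("</answer>");
conversation.StopSequences.Add("\nUser:");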

SystemPrompt

Gets or sets the system prompt applied to the model before the user's request is processed. Set it before the first user message for deterministic behavior across the session.

Methods

Submit(string, CancellationToken)

Submits a prompt to the text generation model and returns the result.

SubmitAsync(string, CancellationToken)

Asynchronously submits a prompt to the text generation model and returns the result.
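
A sketch of the asynchronous path with cooperative cancellation. The return type of SubmitAsync is not detailed on this page (a Task wrapping the completion result is assumed), so the awaited result is only printed.

using System;
using System.Threading;
using System.Threading.Tasks;
using LMKit.TextGeneration;

public static class AsyncConversationDemo
{
    public static async Task RunAsync(IConversation conversation)
    {
        // Cancel generation automatically if it runs longer than 30 seconds.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

        var result = await conversation.SubmitAsync(
            "Explain the difference between top-p and top-k sampling.",
            cts.Token);

        // The concrete result type is not shown on this page.
        Console.WriteLine(result);
    }
}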

Events

AfterTextCompletion

Event triggered after a text completion has finished. Use it to inspect the final assistant output and optionally influence post-processing.

AfterTokenSampling

Event triggered immediately after a token is sampled. Enables detailed adjustments to the token selection process via AfterTokenSamplingEventArgs.

BeforeTokenSampling

Event triggered immediately before a token is sampled. Allows precise adjustments to the token sampling process via BeforeTokenSamplingEventArgs.
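
A sketch of wiring the three lifecycle events on an IConversation instance, assuming they follow the standard .NET event pattern with the EventArgs types named above. The members of those EventArgs types are not listed on this page, so the handlers below only mark where inspection or adjustment would take place.

conversation.BeforeTokenSampling += (sender, e) =>
{
    // Runs immediately before a token is sampled; e is expected to be a
    // BeforeTokenSamplingEventArgs exposing the sampling state to adjust.
};

conversation.AfterTokenSampling += (sender, e) =>
{
    // Runs immediately after a token is sampled; e is expected to be an
    // AfterTokenSamplingEventArgs describing the selected token.
};

conversation.AfterTextCompletion += (sender, e) =>
{
    // Runs once the completion has finished; inspect the final assistant
    // output and apply any post-processing here.
};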