Class MultiTurnConversation
- Namespace: LMKit.TextGeneration
- Assembly: LM-Kit.NET.dll
A class designed to handle multi-turn question-answering scenarios, maintaining the conversation history across successive exchanges with the model.
public sealed class MultiTurnConversation : IConversation, IDisposable
- Inheritance
- object → MultiTurnConversation
- Implements
- IConversation, IDisposable
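Example. The sketch below shows the typical lifecycle: load a model, create a conversation with a given context size, set a system prompt, and submit a request. The model path is a placeholder, and it is assumed here that the LLM model class lives in LMKit.Model, is loaded from a local model file, and that Submit returns a result object exposing the generated text through a Completion property; verify these details against your LM-Kit.NET version.

```csharp
using System;
using System.Threading;
using LMKit.Model;
using LMKit.TextGeneration;

class QuickStart
{
    static void Main()
    {
        // Placeholder model path; point this at a model file on your machine.
        using var model = new LLM(@"C:\models\model.gguf");

        // MultiTurnConversation(LLM, int): fresh conversation with a 4096-token context.
        using var chat = new MultiTurnConversation(model, 4096);

        chat.SystemPrompt = "You are a helpful assistant.";

        // Assumed: the returned result exposes the generated text via Completion.
        var result = chat.Submit("What is the capital of France?", CancellationToken.None);
        Console.WriteLine(result.Completion);
    }
}
```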
Constructors
- MultiTurnConversation(LLM, ChatHistory, int)
Initializes a new instance of the MultiTurnConversation class with a specified language model and existing chat history.
- MultiTurnConversation(LLM, byte[])
Initializes a new instance of the MultiTurnConversation class, restoring a previous conversation session from session data.
- MultiTurnConversation(LLM, int)
Initializes a new instance of the MultiTurnConversation class with a specified language model.
- MultiTurnConversation(LLM, string)
Initializes a new instance of the MultiTurnConversation class, restoring a previous conversation session from a file.
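The sketch below exercises three of the overloads listed above: starting a fresh conversation with an explicit context size, then restoring it from a session file and from raw session bytes. The file paths are placeholders, and the session format produced by SaveSession is treated as opaque to the caller.

```csharp
using System.IO;
using System.Threading;
using LMKit.Model;
using LMKit.TextGeneration;

static class ConstructorVariants
{
    static void Demo(LLM model)
    {
        // MultiTurnConversation(LLM, int): start a fresh conversation.
        using (var chat = new MultiTurnConversation(model, 4096))
        {
            chat.Submit("Hello!", CancellationToken.None);

            // Persist the session to disk and to an in-memory buffer.
            chat.SaveSession(@"C:\sessions\demo.session");
            byte[] sessionData = chat.SaveSession();
            File.WriteAllBytes(@"C:\sessions\demo-copy.session", sessionData);
        }

        // MultiTurnConversation(LLM, string): restore from a session file.
        using (var fromFile = new MultiTurnConversation(model, @"C:\sessions\demo.session"))
        {
            fromFile.Submit("What did I just say?", CancellationToken.None);
        }

        // MultiTurnConversation(LLM, byte[]): restore from raw session data.
        byte[] data = File.ReadAllBytes(@"C:\sessions\demo-copy.session");
        using (var fromBytes = new MultiTurnConversation(model, data))
        {
            fromBytes.Submit("Continue where we left off.", CancellationToken.None);
        }
    }
}
```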
Properties
- ChatHistory
Gets the complete history of the chat session.
- ContextRemainingSpace
Gets the current number of tokens that can still fit into the model's context before it reaches its maximum capacity.
- ContextSize
Specifies the size, in tokens, of the context associated with this instance.
- Grammar
Gets or sets the Grammar object used to enforce grammatical rules during text generation. This ensures controlled and structured output from the model.
- InferencePolicies
Represents the set of policies governing inference operations, including how to handle input length overflow and other inference-specific settings.
- LogitBias
A LogitBias object designed to adjust the likelihood of particular tokens (or text chunks) appearing during text completion.
- MaximumCompletionTokens
Defines the maximum number of tokens (text chunks) permitted for text completion or generation.
- Model
Gets the Model instance associated with this object.
- RepetitionPenalty
A RepetitionPenalty object specifying the rules for repetition penalties applied during text completion.
- SamplingMode
A SamplingMode object specifying the sampling strategy followed during text completion.
- StopSequences
Specifies a set of sequences for which the API will stop generating additional tokens (or text chunks). The resultant text completion will exclude any occurrence of the specified stop sequences.
- SystemPrompt
Specifies the system prompt applied to the model before forwarding the user's requests.
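A sketch of tuning the generation-related properties follows. Writable setters on SystemPrompt and MaximumCompletionTokens, and StopSequences behaving as a mutable string collection, are assumptions drawn from the descriptions above; Grammar, LogitBias, SamplingMode and RepetitionPenalty accept dedicated option objects of the types named in the list.

```csharp
using System;
using LMKit.TextGeneration;

static class GenerationSettings
{
    static void Configure(MultiTurnConversation chat)
    {
        // System prompt applied before the user's requests are forwarded to the model.
        chat.SystemPrompt = "You are a concise technical assistant.";

        // Cap the length of each generated reply (in tokens).
        chat.MaximumCompletionTokens = 512;

        // Stop generation as soon as one of these sequences is produced.
        // Assumed: StopSequences is a mutable collection of strings.
        chat.StopSequences.Add("</answer>");

        // Inspect the context budget.
        Console.WriteLine($"{chat.ContextRemainingSpace} of {chat.ContextSize} tokens still free.");
    }
}
```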
Methods
- ClearHistory()
Resets the conversation history, effectively beginning a fresh session.
- ContinueLastAssistantResponse(CancellationToken)
Continues generating additional content for the assistant's most recent response.
- ContinueLastAssistantResponseAsync(CancellationToken)
Asynchronously continues generating additional content for the assistant's most recent response.
- Dispose()
Releases this instance and all associated unmanaged resources.
- RegenerateResponse(CancellationToken)
Requests the model to generate a new response to the previous inquiry.
- RegenerateResponseAsync(CancellationToken)
Asynchronously requests the model to generate a new response to the previous inquiry.
- SaveSession()
Saves the current chat session state and returns it as a byte array.
- SaveSession(string)
Saves the current chat session state to a specified file path.
- Submit(string, CancellationToken)
Prompts the model with an arbitrary request.
- SubmitAsync(string, CancellationToken)
Asynchronously prompts the model with an arbitrary request.
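The sketch below walks through a typical method sequence: submitting a prompt, regenerating the last answer, continuing it, and finally resetting the history. It assumes the asynchronous variants return awaitable results that mirror their synchronous counterparts and expose the generated text through a Completion property, which is not confirmed by this page.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using LMKit.TextGeneration;

static class MethodsDemo
{
    static async Task RunAsync(MultiTurnConversation chat, CancellationToken token)
    {
        // Ask a question.
        var answer = await chat.SubmitAsync("Summarize the TCP handshake.", token);
        Console.WriteLine(answer.Completion);

        // Not satisfied? Ask the model to try again on the same inquiry.
        var retry = await chat.RegenerateResponseAsync(token);
        Console.WriteLine(retry.Completion);

        // Or let the model keep writing where its last answer stopped.
        var continuation = await chat.ContinueLastAssistantResponseAsync(token);
        Console.WriteLine(continuation.Completion);

        // Start over with an empty history while keeping the same instance.
        chat.ClearHistory();
    }
}
```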
Events
- AfterTextCompletion
This event is triggered following the execution of a text completion.
- AfterTokenSampling
This event is triggered just after the generation of a token. The provided AfterTokenSamplingEventArgs argument enables detailed modifications to the token selection process.
- BeforeTokenSampling
This event is triggered just before the generation of a token. The provided BeforeTokenSamplingEventArgs argument allows for precise adjustments to the token sampling process.
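Events can be used to observe or steer generation while it runs. The sketch below writes generated text to the console from an AfterTextCompletion handler; whether the event fires once per completion or incrementally per generated chunk, and whether its event args expose the new text through a Text property, are assumptions to verify against the event args documentation.

```csharp
using System;
using System.Threading;
using LMKit.TextGeneration;

static class EventsDemo
{
    static void Stream(MultiTurnConversation chat)
    {
        // Print each piece of completed text as soon as the event reports it
        // (assumes the event args carry the new text in a Text property).
        chat.AfterTextCompletion += (sender, e) => Console.Write(e.Text);

        chat.Submit("Tell me a short story about a lighthouse.", CancellationToken.None);
        Console.WriteLine();
    }
}
```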