Class DeviceConfiguration
Provides methods for configuring device-related settings based on GPU hardware capabilities.
public static class DeviceConfiguration
Inheritance
object → DeviceConfiguration
Examples
Example: Get optimal context size for model loading
using LMKit.Hardware;
using LMKit.Model;
using System;
// Get optimal context size based on available GPU memory
int contextSize = DeviceConfiguration.GetOptimalContextSize();
Console.WriteLine($"Recommended context size: {contextSize} tokens");
// Load model with optimal settings
LM model = LM.LoadFromModelID("llama-3.2-1b");
int modelContextSize = DeviceConfiguration.GetOptimalContextSize(model);
Console.WriteLine($"Context size for this model: {modelContextSize} tokens");
Example: Check GPU layer count for a model
using LMKit.Hardware;
using LMKit.Model;
using System;
// Get model card to check hardware compatibility
ModelCard card = ModelCard.GetPredefinedModelCardByModelID("llama-3.2-3b");
// Calculate optimal GPU layers based on available memory
int gpuLayers = DeviceConfiguration.GetOptimalGpuLayerCount(card);
Console.WriteLine($"Recommended GPU layers: {gpuLayers}");
// Check expected performance score (0.0 to 1.0)
float score = DeviceConfiguration.GetPerformanceScore(card);
Console.WriteLine($"Expected performance: {score:P0}");
Remarks
This class helps determine optimal configuration values for GPU-related tasks by inspecting available GPU resources. It can select a suitable context size based on free GPU memory, estimate the number of model layers to offload to the GPU, and compute an expected performance score for a given model.
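The remarks above can be combined into a simple pre-load check. The following sketch uses only the methods shown in the examples; the 0.5 score threshold and the model ID are illustrative assumptions, not library recommendations.

```csharp
using LMKit.Hardware;
using LMKit.Model;
using System;

// Hypothetical gating pattern: only proceed with loading when the
// expected performance score clears a threshold of our choosing.
ModelCard card = ModelCard.GetPredefinedModelCardByModelID("llama-3.2-1b");
float score = DeviceConfiguration.GetPerformanceScore(card);

if (score >= 0.5f)
{
    // Query recommended settings before loading the model.
    int contextSize = DeviceConfiguration.GetOptimalContextSize();
    int gpuLayers = DeviceConfiguration.GetOptimalGpuLayerCount(card);
    Console.WriteLine($"Loading with context size {contextSize} and {gpuLayers} GPU layers");
}
else
{
    Console.WriteLine($"Score {score:P0} is below threshold; consider a smaller model");
}
```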
Methods
- GetOptimalContextSize()
Determines the optimal GPU context size based on the free memory of the currently available GPU device.
- GetOptimalContextSize(LM)
Determines the optimal GPU context size based on the free memory of the currently available GPU device, capping the result according to the specified language model.
- GetOptimalGpuLayerCount(ModelCard)
Calculates the optimal number of model layers to offload to the GPU for the provided model card, based on the available GPU memory.
- GetPerformanceScore(LM)
Calculates a performance score for the provided language model based on the GPU device's total memory and the model's estimated memory requirement.
- GetPerformanceScore(ModelCard)
Calculates a performance score for the provided model card based on the GPU device's total memory and the model's file size.
- GetPerformanceScore(string)
Calculates a performance score for the language model located at the specified file path, based on the GPU device's total memory and the model's estimated memory requirement.
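The three GetPerformanceScore overloads differ only in how the model's memory requirement is estimated; each returns a score in the 0.0 to 1.0 range. The sketch below compares them side by side. The file path passed to the string overload is a hypothetical placeholder.

```csharp
using LMKit.Hardware;
using LMKit.Model;
using System;

// Score from a loaded model instance (uses the model's estimated
// memory requirement).
LM model = LM.LoadFromModelID("llama-3.2-1b");
float fromModel = DeviceConfiguration.GetPerformanceScore(model);

// Score from a model card (uses the model's file size).
ModelCard card = ModelCard.GetPredefinedModelCardByModelID("llama-3.2-1b");
float fromCard = DeviceConfiguration.GetPerformanceScore(card);

// Score from a file path (path below is a placeholder, not a real file).
float fromPath = DeviceConfiguration.GetPerformanceScore(@"C:\models\my-model.gguf");

Console.WriteLine($"Model: {fromModel:P0}, card: {fromCard:P0}, path: {fromPath:P0}");
```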